Test Report: Docker_Linux_crio_arm64 21923

                    
0ff1edca1acc03f8c3eb691c9cf9caebdbe6133d:2025-11-20:42417

Failed tests (41/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.66
35 TestAddons/parallel/Registry 17.09
36 TestAddons/parallel/RegistryCreds 0.71
37 TestAddons/parallel/Ingress 145.41
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.52
41 TestAddons/parallel/CSI 46.46
42 TestAddons/parallel/Headlamp 3.27
43 TestAddons/parallel/CloudSpanner 6.31
44 TestAddons/parallel/LocalPath 8.52
45 TestAddons/parallel/NvidiaDevicePlugin 6.29
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.55
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
128 TestFunctional/parallel/ServiceCmd/DeployApp 600.84
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
147 TestFunctional/parallel/ServiceCmd/Format 0.4
148 TestFunctional/parallel/ServiceCmd/URL 0.41
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 448.34
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.46
177 TestMultiControlPlane/serial/RestartCluster 369.56
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.36
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.2
191 TestJSONOutput/pause/Command 1.85
197 TestJSONOutput/unpause/Command 1.71
282 TestPause/serial/Pause 7.43
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.61
304 TestStartStop/group/old-k8s-version/serial/Pause 6.71
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.65
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.04
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.33
328 TestStartStop/group/embed-certs/serial/Pause 8.03
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.39
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.56
342 TestStartStop/group/newest-cni/serial/Pause 6.4
349 TestStartStop/group/no-preload/serial/Pause 6.79
TestAddons/serial/Volcano (0.66s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable volcano --alsologtostderr -v=1: exit status 11 (657.781206ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:13:37.510943  843522 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:13:37.511808  843522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:37.511834  843522 out.go:374] Setting ErrFile to fd 2...
	I1120 21:13:37.511839  843522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:37.512128  843522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:13:37.512485  843522 mustload.go:66] Loading cluster: addons-828342
	I1120 21:13:37.512915  843522 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:13:37.512973  843522 addons.go:607] checking whether the cluster is paused
	I1120 21:13:37.513104  843522 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:13:37.513122  843522 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:13:37.513581  843522 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:13:37.550160  843522 ssh_runner.go:195] Run: systemctl --version
	I1120 21:13:37.550224  843522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:13:37.569136  843522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:13:37.673761  843522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:13:37.673848  843522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:13:37.704625  843522 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:13:37.704648  843522 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:13:37.704654  843522 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:13:37.704658  843522 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:13:37.704672  843522 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:13:37.704677  843522 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:13:37.704681  843522 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:13:37.704684  843522 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:13:37.704688  843522 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:13:37.704694  843522 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:13:37.704698  843522 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:13:37.704702  843522 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:13:37.704705  843522 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:13:37.704708  843522 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:13:37.704711  843522 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:13:37.704716  843522 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:13:37.704725  843522 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:13:37.704729  843522 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:13:37.704732  843522 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:13:37.704735  843522 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:13:37.704752  843522 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:13:37.704760  843522 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:13:37.704764  843522 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:13:37.704767  843522 cri.go:89] found id: ""
	I1120 21:13:37.704830  843522 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:13:37.719683  843522 out.go:203] 
	W1120 21:13:37.721631  843522 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:13:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:13:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:13:37.721660  843522 out.go:285] * 
	* 
	W1120 21:13:38.077943  843522 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:13:38.079283  843522 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.66s)
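
Every addon-disable failure in this run follows the pattern shown in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json"; on this CRI-O node /run/runc does not exist, so the runc call fails and the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The Go sketch below is not minikube's code; it only mirrors the two commands visible in the log so the failing step can be reproduced directly on the node (for example via minikube ssh).

package main

import (
	"fmt"
	"os/exec"
)

// listKubeSystemContainers mirrors the logged call:
//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
func listKubeSystemContainers() ([]byte, error) {
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
}

// listPausedWithRunc mirrors the call that fails throughout this report:
//   sudo runc list -f json
// On these nodes it errors with "open /run/runc: no such file or directory".
func listPausedWithRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	if ids, err := listKubeSystemContainers(); err == nil {
		fmt.Printf("crictl listed %d bytes of kube-system container IDs\n", len(ids))
	}
	if _, err := listPausedWithRunc(); err != nil {
		// This is the branch every failing "addons disable" hits above;
		// minikube surfaces it as MK_ADDON_DISABLE_PAUSED / exit status 11.
		fmt.Println("paused check failed:", err)
	}
}

Run on the node itself, this should fail the same way the addon-disable commands do in the logs above.
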

                                                
                                    
TestAddons/parallel/Registry (17.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.32066ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003828483s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.008073552s
addons_test.go:392: (dbg) Run:  kubectl --context addons-828342 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-828342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-828342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.49856676s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 ip
2025/11/20 21:14:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable registry --alsologtostderr -v=1: exit status 11 (269.057166ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:14:05.217929  844011 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:05.218688  844011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:05.218728  844011 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:05.218753  844011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:05.219084  844011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:05.219456  844011 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:05.219879  844011 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:05.219922  844011 addons.go:607] checking whether the cluster is paused
	I1120 21:14:05.220054  844011 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:05.220089  844011 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:05.220647  844011 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:05.242582  844011 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:05.242800  844011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:05.265318  844011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:05.365838  844011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:05.365922  844011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:05.405012  844011 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:05.405038  844011 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:05.405043  844011 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:05.405047  844011 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:05.405052  844011 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:05.405063  844011 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:05.405088  844011 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:05.405097  844011 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:05.405100  844011 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:05.405107  844011 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:05.405116  844011 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:05.405120  844011 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:05.405123  844011 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:05.405126  844011 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:05.405129  844011 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:05.405141  844011 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:05.405163  844011 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:05.405181  844011 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:05.405190  844011 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:05.405193  844011 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:05.405200  844011 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:05.405209  844011 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:05.405212  844011 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:05.405216  844011 cri.go:89] found id: ""
	I1120 21:14:05.405279  844011 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:05.418629  844011 out.go:203] 
	W1120 21:14:05.419959  844011 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:05.419985  844011 out.go:285] * 
	* 
	W1120 21:14:05.428065  844011 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:05.429486  844011 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (17.09s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 15.601274ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-828342
addons_test.go:332: (dbg) Run:  kubectl --context addons-828342 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (323.775498ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:14:42.066816  845772 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:42.067788  845772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:42.067847  845772 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:42.067871  845772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:42.068205  845772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:42.068579  845772 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:42.069065  845772 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:42.069116  845772 addons.go:607] checking whether the cluster is paused
	I1120 21:14:42.069269  845772 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:42.069304  845772 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:42.069844  845772 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:42.099790  845772 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:42.099862  845772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:42.123727  845772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:42.233022  845772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:42.233183  845772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:42.280157  845772 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:42.280188  845772 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:42.280194  845772 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:42.280198  845772 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:42.280202  845772 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:42.280206  845772 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:42.280209  845772 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:42.280212  845772 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:42.280215  845772 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:42.280223  845772 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:42.280226  845772 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:42.280229  845772 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:42.280232  845772 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:42.280235  845772 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:42.280239  845772 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:42.280246  845772 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:42.280253  845772 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:42.280258  845772 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:42.280261  845772 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:42.280264  845772 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:42.280269  845772 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:42.280275  845772 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:42.280278  845772 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:42.280281  845772 cri.go:89] found id: ""
	I1120 21:14:42.280349  845772 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:42.308282  845772 out.go:203] 
	W1120 21:14:42.311488  845772 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:42.311531  845772 out.go:285] * 
	* 
	W1120 21:14:42.319946  845772 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:42.323340  845772 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
TestAddons/parallel/Ingress (145.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-828342 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-828342 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-828342 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4a8b55a3-2bfd-49ab-83f6-c8fce0cc11f7] Pending
helpers_test.go:352: "nginx" [4a8b55a3-2bfd-49ab-83f6-c8fce0cc11f7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003649016s
I1120 21:14:51.984214  836852 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.354727562s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-828342 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
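
The curl probe a few lines above fails with status 28, curl's operation-timed-out exit code, after roughly 2m10s, meaning no response arrived from the ingress controller within the timeout. Below is a minimal Go sketch of the same probe with an explicit per-attempt deadline; the profile name and the 30-second cap are illustrative, while the command string is the one the test runs:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// probeIngress runs the same command the test uses, but bounds a single
// attempt with a context deadline instead of relying on the test's
// overall retry window.
func probeIngress(ctx context.Context, profile string) error {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second) // illustrative cap
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "-p", profile,
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("ingress probe failed: %w (output: %q)", err, out)
	}
	return nil
}

func main() {
	if err := probeIngress(context.Background(), "addons-828342"); err != nil {
		fmt.Println(err)
	}
}
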
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-828342
helpers_test.go:243: (dbg) docker inspect addons-828342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f",
	        "Created": "2025-11-20T21:11:16.147726375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838012,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:11:16.207148163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/hostname",
	        "HostsPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/hosts",
	        "LogPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f-json.log",
	        "Name": "/addons-828342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-828342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-828342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f",
	                "LowerDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-828342",
	                "Source": "/var/lib/docker/volumes/addons-828342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-828342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-828342",
	                "name.minikube.sigs.k8s.io": "addons-828342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "180619f4b7334bfb68e04222525e77dcf9ddaa6ac5dc79f2e8b408d065282995",
	            "SandboxKey": "/var/run/docker/netns/180619f4b733",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-828342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4e:db:48:e9:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69692cf3204c643c9d16d84d2f480a3beb892e409b320e951e971b06bb156b0",
	                    "EndpointID": "9572861d8caf39d0a4902b1cffc6bbabe57b872a7e7008061b7c731d065ec257",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-828342",
	                        "457f849792bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-828342 -n addons-828342
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-828342 logs -n 25: (1.522985715s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-294137                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-294137 │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ --download-only -p binary-mirror-490692 --alsologtostderr --binary-mirror http://127.0.0.1:37155 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-490692   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ delete  │ -p binary-mirror-490692                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-490692   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ addons  │ disable dashboard -p addons-828342                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ addons  │ enable dashboard -p addons-828342                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ start   │ -p addons-828342 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:13 UTC │
	│ addons  │ addons-828342 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ addons  │ addons-828342 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ addons  │ addons-828342 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ ip      │ addons-828342 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ addons-828342 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ addons-828342 ssh cat /opt/local-path-provisioner/pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ addons-828342 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-828342 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-828342                                                                                                                                                                                                                                                                                                                                                                                           │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ addons-828342 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ addons-828342 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ip      │ addons-828342 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:17 UTC │ 20 Nov 25 21:17 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:10:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:10:50.054958  837622 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:50.055241  837622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:50.055250  837622 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:50.055255  837622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:50.055628  837622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:10:50.056187  837622 out.go:368] Setting JSON to false
	I1120 21:10:50.057117  837622 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13975,"bootTime":1763659075,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:10:50.057218  837622 start.go:143] virtualization:  
	I1120 21:10:50.060740  837622 out.go:179] * [addons-828342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:10:50.064443  837622 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:10:50.064592  837622 notify.go:221] Checking for updates...
	I1120 21:10:50.070409  837622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:10:50.073470  837622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:10:50.076316  837622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:10:50.079258  837622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:10:50.082099  837622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:10:50.085178  837622 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:10:50.113578  837622 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:10:50.113706  837622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:50.180368  837622 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:50.170691246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:50.180484  837622 docker.go:319] overlay module found
	I1120 21:10:50.183511  837622 out.go:179] * Using the docker driver based on user configuration
	I1120 21:10:50.186261  837622 start.go:309] selected driver: docker
	I1120 21:10:50.186285  837622 start.go:930] validating driver "docker" against <nil>
	I1120 21:10:50.186300  837622 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:10:50.187053  837622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:50.249914  837622 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:50.240979965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:50.250073  837622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:10:50.250322  837622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:10:50.253259  837622 out.go:179] * Using Docker driver with root privileges
	I1120 21:10:50.256034  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:10:50.256098  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:10:50.256113  837622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:10:50.256202  837622 start.go:353] cluster config:
	{Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1120 21:10:50.259298  837622 out.go:179] * Starting "addons-828342" primary control-plane node in "addons-828342" cluster
	I1120 21:10:50.262136  837622 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:10:50.265133  837622 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:10:50.267986  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:10:50.268035  837622 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:10:50.268048  837622 cache.go:65] Caching tarball of preloaded images
	I1120 21:10:50.268062  837622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:10:50.268132  837622 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:10:50.268142  837622 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:10:50.268475  837622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json ...
	I1120 21:10:50.268499  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json: {Name:mk3184c7dba130c932bc9e5294a677adb27e05fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:50.283287  837622 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 21:10:50.283396  837622 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 21:10:50.283421  837622 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 21:10:50.283433  837622 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 21:10:50.283441  837622 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 21:10:50.283447  837622 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1120 21:11:08.260687  837622 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1120 21:11:08.260733  837622 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:11:08.260763  837622 start.go:360] acquireMachinesLock for addons-828342: {Name:mk557b86f17357107ee0584eb0543209b8fb35ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:11:08.261617  837622 start.go:364] duration metric: took 825.93µs to acquireMachinesLock for "addons-828342"
	I1120 21:11:08.261664  837622 start.go:93] Provisioning new machine with config: &{Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:11:08.261753  837622 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:11:08.265299  837622 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1120 21:11:08.265547  837622 start.go:159] libmachine.API.Create for "addons-828342" (driver="docker")
	I1120 21:11:08.265587  837622 client.go:173] LocalClient.Create starting
	I1120 21:11:08.265726  837622 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 21:11:08.824195  837622 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 21:11:09.355611  837622 cli_runner.go:164] Run: docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:11:09.370473  837622 cli_runner.go:211] docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:11:09.370557  837622 network_create.go:284] running [docker network inspect addons-828342] to gather additional debugging logs...
	I1120 21:11:09.370574  837622 cli_runner.go:164] Run: docker network inspect addons-828342
	W1120 21:11:09.386772  837622 cli_runner.go:211] docker network inspect addons-828342 returned with exit code 1
	I1120 21:11:09.386799  837622 network_create.go:287] error running [docker network inspect addons-828342]: docker network inspect addons-828342: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-828342 not found
	I1120 21:11:09.386813  837622 network_create.go:289] output of [docker network inspect addons-828342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-828342 not found
	
	** /stderr **
	I1120 21:11:09.386921  837622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:11:09.403697  837622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bfc5f0}
	I1120 21:11:09.403749  837622 network_create.go:124] attempt to create docker network addons-828342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1120 21:11:09.403823  837622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-828342 addons-828342
	I1120 21:11:09.462507  837622 network_create.go:108] docker network addons-828342 192.168.49.0/24 created
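
The lines above show minikube picking the free 192.168.49.0/24 subnet and creating a dedicated bridge network for the profile. As an illustrative aside (not part of this test run), the result can be cross-checked with the same docker CLI the log already uses; the Go-template fields below are standard "docker network inspect" syntax, and the network name and addresses are taken from the log:

    # Confirm the subnet and gateway of the network the log reports as created
    docker network inspect addons-828342 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
    # expected, per the log above: subnet=192.168.49.0/24 gateway=192.168.49.1
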
	I1120 21:11:09.462536  837622 kic.go:121] calculated static IP "192.168.49.2" for the "addons-828342" container
	I1120 21:11:09.462610  837622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:11:09.477741  837622 cli_runner.go:164] Run: docker volume create addons-828342 --label name.minikube.sigs.k8s.io=addons-828342 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:11:09.495502  837622 oci.go:103] Successfully created a docker volume addons-828342
	I1120 21:11:09.495610  837622 cli_runner.go:164] Run: docker run --rm --name addons-828342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --entrypoint /usr/bin/test -v addons-828342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:11:11.675805  837622 cli_runner.go:217] Completed: docker run --rm --name addons-828342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --entrypoint /usr/bin/test -v addons-828342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (2.180154992s)
	I1120 21:11:11.675835  837622 oci.go:107] Successfully prepared a docker volume addons-828342
	I1120 21:11:11.675899  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:11:11.675909  837622 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:11:11.675970  837622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-828342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:11:16.078010  837622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-828342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.401989429s)
	I1120 21:11:16.078047  837622 kic.go:203] duration metric: took 4.402133751s to extract preloaded images to volume ...
	W1120 21:11:16.078192  837622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:11:16.078308  837622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:11:16.132622  837622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-828342 --name addons-828342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-828342 --network addons-828342 --ip 192.168.49.2 --volume addons-828342:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:11:16.408930  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Running}}
	I1120 21:11:16.433430  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:16.454484  837622 cli_runner.go:164] Run: docker exec addons-828342 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:11:16.527681  837622 oci.go:144] the created container "addons-828342" has a running status.
	I1120 21:11:16.527710  837622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa...
	I1120 21:11:16.862588  837622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:11:16.895422  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:16.924998  837622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:11:16.925017  837622 kic_runner.go:114] Args: [docker exec --privileged addons-828342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:11:16.982497  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:17.008534  837622 machine.go:94] provisionDockerMachine start ...
	I1120 21:11:17.008648  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:17.037831  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:17.038164  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:17.038173  837622 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:11:17.038965  837622 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:11:20.183042  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-828342
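
At this point libmachine has working SSH access: the container publishes port 22 on 127.0.0.1:33877 and the generated key at .minikube/machines/addons-828342/id_rsa is authorized for the docker user. A purely illustrative, hedged equivalent of what the provisioner does (all values taken from the log, not from a documented minikube interface):

    # Reach the node the same way the SSH provisioner does
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 33877 \
      -i /home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa \
      docker@127.0.0.1 hostname
    # expected output, matching the SSH result above: addons-828342
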
	
	I1120 21:11:20.183075  837622 ubuntu.go:182] provisioning hostname "addons-828342"
	I1120 21:11:20.183146  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.202587  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:20.202911  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:20.202928  837622 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-828342 && echo "addons-828342" | sudo tee /etc/hostname
	I1120 21:11:20.353146  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-828342
	
	I1120 21:11:20.353247  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.371232  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:20.371552  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:20.371573  837622 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-828342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-828342/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-828342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:11:20.515286  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:11:20.515312  837622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:11:20.515330  837622 ubuntu.go:190] setting up certificates
	I1120 21:11:20.515339  837622 provision.go:84] configureAuth start
	I1120 21:11:20.515399  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:20.532775  837622 provision.go:143] copyHostCerts
	I1120 21:11:20.532885  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:11:20.533018  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:11:20.533081  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:11:20.533135  837622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.addons-828342 san=[127.0.0.1 192.168.49.2 addons-828342 localhost minikube]
	I1120 21:11:20.943141  837622 provision.go:177] copyRemoteCerts
	I1120 21:11:20.943211  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:11:20.943256  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.960125  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.063023  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:11:21.083949  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:11:21.101446  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:11:21.119171  837622 provision.go:87] duration metric: took 603.796423ms to configureAuth
	I1120 21:11:21.119196  837622 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:11:21.119414  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:21.119525  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.136394  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:21.136716  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:21.136737  837622 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:11:21.423233  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:11:21.423302  837622 machine.go:97] duration metric: took 4.41474688s to provisionDockerMachine
	I1120 21:11:21.423326  837622 client.go:176] duration metric: took 13.157728421s to LocalClient.Create
	I1120 21:11:21.423379  837622 start.go:167] duration metric: took 13.157833128s to libmachine.API.Create "addons-828342"
	I1120 21:11:21.423406  837622 start.go:293] postStartSetup for "addons-828342" (driver="docker")
	I1120 21:11:21.423435  837622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:11:21.423538  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:11:21.423667  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.441324  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.547226  837622 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:11:21.550585  837622 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:11:21.550618  837622 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:11:21.550630  837622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:11:21.550700  837622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:11:21.550727  837622 start.go:296] duration metric: took 127.297465ms for postStartSetup
	I1120 21:11:21.551074  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:21.567974  837622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json ...
	I1120 21:11:21.568289  837622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:11:21.568345  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.584899  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.688088  837622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:11:21.692877  837622 start.go:128] duration metric: took 13.431107515s to createHost
	I1120 21:11:21.692902  837622 start.go:83] releasing machines lock for "addons-828342", held for 13.431262027s
	I1120 21:11:21.692983  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:21.709594  837622 ssh_runner.go:195] Run: cat /version.json
	I1120 21:11:21.709654  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.709913  837622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:11:21.709973  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.733623  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.744453  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.830567  837622 ssh_runner.go:195] Run: systemctl --version
	I1120 21:11:21.923904  837622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:11:21.961071  837622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:11:21.965728  837622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:11:21.965825  837622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:11:21.995148  837622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:11:21.995230  837622 start.go:496] detecting cgroup driver to use...
	I1120 21:11:21.995271  837622 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:11:21.995338  837622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:11:22.013503  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:11:22.026950  837622 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:11:22.027042  837622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:11:22.046235  837622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:11:22.066313  837622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:11:22.198451  837622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:11:22.343165  837622 docker.go:234] disabling docker service ...
	I1120 21:11:22.343238  837622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:11:22.373117  837622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:11:22.386952  837622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:11:22.516711  837622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:11:22.646385  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:11:22.659619  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:11:22.675169  837622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:11:22.675266  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.684678  837622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:11:22.684756  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.694494  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.703990  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.713067  837622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:11:22.721623  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.730753  837622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.745300  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.754212  837622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:11:22.761767  837622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:11:22.769142  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:22.890868  837622 ssh_runner.go:195] Run: sudo systemctl restart crio
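
The sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroupfs cgroup manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls) before CRI-O is restarted. An illustrative check, run inside the node and not part of the test itself:

    # Confirm the drop-in carries the values the sed edits above are meant to set
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
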
	I1120 21:11:23.082409  837622 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:11:23.082551  837622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:11:23.086525  837622 start.go:564] Will wait 60s for crictl version
	I1120 21:11:23.086593  837622 ssh_runner.go:195] Run: which crictl
	I1120 21:11:23.090051  837622 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:11:23.114234  837622 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:11:23.114344  837622 ssh_runner.go:195] Run: crio --version
	I1120 21:11:23.142228  837622 ssh_runner.go:195] Run: crio --version
	I1120 21:11:23.176513  837622 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:11:23.179225  837622 cli_runner.go:164] Run: docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:11:23.194361  837622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:11:23.198186  837622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:11:23.207660  837622 kubeadm.go:884] updating cluster {Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:11:23.207772  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:11:23.207829  837622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:11:23.239884  837622 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:11:23.239910  837622 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:11:23.239968  837622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:11:23.269224  837622 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:11:23.269250  837622 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:11:23.269258  837622 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:11:23.269358  837622 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-828342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
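
The kubelet flags above are rendered into a systemd drop-in (the 10-kubeadm.conf copied over a little further down) alongside the kubelet.service unit. As an illustrative follow-up, assuming the drop-in has been written and daemon-reload has run:

    # Inside the node: show the unit plus its minikube drop-in, then check it is running
    sudo systemctl cat kubelet
    sudo systemctl is-active kubelet
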
	I1120 21:11:23.269443  837622 ssh_runner.go:195] Run: crio config
	I1120 21:11:23.333177  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:11:23.333219  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:11:23.333243  837622 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:11:23.333277  837622 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-828342 NodeName:addons-828342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:11:23.333408  837622 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-828342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
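
The generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged way to sanity-check a config of this shape without applying anything to the node (the binary and file paths are taken from the surrounding log lines, not from minikube documentation):

    # Validate the rendered config; --dry-run avoids applying any changes
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
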
	
	I1120 21:11:23.333494  837622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:11:23.341470  837622 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:11:23.341568  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:11:23.349509  837622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:11:23.362477  837622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:11:23.377923  837622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1120 21:11:23.396608  837622 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:11:23.402531  837622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:11:23.412472  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:23.529843  837622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:23.550029  837622 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342 for IP: 192.168.49.2
	I1120 21:11:23.550054  837622 certs.go:195] generating shared ca certs ...
	I1120 21:11:23.550072  837622 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:23.550215  837622 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:11:24.108256  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt ...
	I1120 21:11:24.108289  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt: {Name:mk99b4138ffbdd521ade86fe93e2ecb16a119bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.109124  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key ...
	I1120 21:11:24.109142  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key: {Name:mk9989a3516add42f4cc91a43b4f457a4ffe45b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.109804  837622 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:11:24.659335  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt ...
	I1120 21:11:24.659369  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt: {Name:mk17edb29508da4a28dfe448254668558046171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.659556  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key ...
	I1120 21:11:24.659570  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key: {Name:mk8712b76bb21a33d7d0a56aadaf09a5974dd74e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.659653  837622 certs.go:257] generating profile certs ...
	I1120 21:11:24.659723  837622 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key
	I1120 21:11:24.659743  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt with IP's: []
	I1120 21:11:25.231610  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt ...
	I1120 21:11:25.231644  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: {Name:mk6f16491fb88000ee2dc18919f6827195283bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.232502  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key ...
	I1120 21:11:25.232528  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key: {Name:mka1352f5a0708566dd0785034fd37ac540dd680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.232693  837622 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139
	I1120 21:11:25.232733  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1120 21:11:25.624070  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 ...
	I1120 21:11:25.624103  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139: {Name:mkea90d949b0f2fd6ce61d7102d8bda7038f4e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.624336  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139 ...
	I1120 21:11:25.624356  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139: {Name:mk1ae3bfad5daff18aeebf26340ca9af94a3bb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.624440  837622 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt
	I1120 21:11:25.624530  837622 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key
	I1120 21:11:25.624587  837622 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key
	I1120 21:11:25.624607  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt with IP's: []
	I1120 21:11:26.154993  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt ...
	I1120 21:11:26.155025  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt: {Name:mke6860b26526f96f3ed5f02e152067209959fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.155220  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key ...
	I1120 21:11:26.155235  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key: {Name:mkb1be3640fc1c7d774719dd0e365c182c9c4b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.156088  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:11:26.156137  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:11:26.156162  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:11:26.156190  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:11:26.156751  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:11:26.176749  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:11:26.194526  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:11:26.212154  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:11:26.230353  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:11:26.248408  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:11:26.265975  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:11:26.283630  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:11:26.301383  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:11:26.318726  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:11:26.333197  837622 ssh_runner.go:195] Run: openssl version
	I1120 21:11:26.339572  837622 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.347109  837622 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:11:26.354724  837622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.358500  837622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.358566  837622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.400044  837622 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:11:26.407484  837622 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
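The two commands above follow OpenSSL's trust-store convention: certificates in /etc/ssl/certs are looked up by subject hash, so the CA gets a "<hash>.0" symlink (b5213941 is simply the hash openssl printed for this particular minikubeCA.pem). A minimal sketch of the same convention, with the hash computed rather than hard-coded:

	# OpenSSL indexes /etc/ssl/certs by subject hash; link the CA under "<hash>.0"
	# so TLS clients on the node can validate certificates signed by minikubeCA.
	HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"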
	I1120 21:11:26.414900  837622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:11:26.418549  837622 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:11:26.418600  837622 kubeadm.go:401] StartCluster: {Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:11:26.418670  837622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:11:26.418749  837622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:11:26.450638  837622 cri.go:89] found id: ""
	I1120 21:11:26.450709  837622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:11:26.458709  837622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:11:26.466609  837622 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:11:26.466678  837622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:11:26.474613  837622 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:11:26.474635  837622 kubeadm.go:158] found existing configuration files:
	
	I1120 21:11:26.474707  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:11:26.482763  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:11:26.482836  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:11:26.490396  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:11:26.498477  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:11:26.498564  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:11:26.506167  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:11:26.514140  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:11:26.514265  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:11:26.522115  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:11:26.529870  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:11:26.529934  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:11:26.537386  837622 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:11:26.581585  837622 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:11:26.581733  837622 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:11:26.602561  837622 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:11:26.602652  837622 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:11:26.602699  837622 kubeadm.go:319] OS: Linux
	I1120 21:11:26.602747  837622 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:11:26.602798  837622 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:11:26.602848  837622 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:11:26.602899  837622 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:11:26.602949  837622 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:11:26.603019  837622 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:11:26.603070  837622 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:11:26.603127  837622 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:11:26.603179  837622 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:11:26.676866  837622 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:11:26.677043  837622 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:11:26.677170  837622 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:11:26.691901  837622 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:11:26.697987  837622 out.go:252]   - Generating certificates and keys ...
	I1120 21:11:26.698173  837622 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:11:26.698301  837622 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:11:26.862024  837622 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:11:27.357948  837622 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:11:28.306939  837622 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:11:28.878469  837622 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:11:29.130588  837622 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:11:29.130742  837622 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-828342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 21:11:29.550693  837622 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:11:29.551042  837622 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-828342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 21:11:30.129271  837622 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:11:30.812324  837622 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:11:31.045202  837622 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:11:31.045498  837622 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:11:31.256100  837622 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:11:31.510179  837622 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:11:31.722489  837622 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:11:32.555753  837622 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:11:33.455597  837622 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:11:33.456613  837622 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:11:33.459666  837622 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:11:33.463285  837622 out.go:252]   - Booting up control plane ...
	I1120 21:11:33.463389  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:11:33.463471  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:11:33.464907  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:11:33.484594  837622 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:11:33.484707  837622 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:11:33.493519  837622 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:11:33.494939  837622 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:11:33.495247  837622 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:11:33.635424  837622 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:11:33.635549  837622 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:34.143472  837622 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 508.694527ms
	I1120 21:11:34.146894  837622 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:34.147383  837622 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1120 21:11:34.148222  837622 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:34.148561  837622 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:37.582117  837622 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.433143937s
	I1120 21:11:38.351938  837622 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.202423482s
	I1120 21:11:40.149490  837622 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001605498s
	I1120 21:11:40.169549  837622 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:40.183206  837622 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:40.198626  837622 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:40.198840  837622 kubeadm.go:319] [mark-control-plane] Marking the node addons-828342 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:40.213111  837622 kubeadm.go:319] [bootstrap-token] Using token: kdkmn8.0zmhcrclk06dr83a
	I1120 21:11:40.216167  837622 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:40.216305  837622 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:40.226084  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:40.235160  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:40.239573  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:40.244002  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:40.248395  837622 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:40.556721  837622 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:41.016074  837622 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:41.558208  837622 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:41.559514  837622 kubeadm.go:319] 
	I1120 21:11:41.559589  837622 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:41.559611  837622 kubeadm.go:319] 
	I1120 21:11:41.559723  837622 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:41.559742  837622 kubeadm.go:319] 
	I1120 21:11:41.559774  837622 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:41.559862  837622 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:41.559952  837622 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:41.559958  837622 kubeadm.go:319] 
	I1120 21:11:41.560022  837622 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:41.560027  837622 kubeadm.go:319] 
	I1120 21:11:41.560094  837622 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:41.560100  837622 kubeadm.go:319] 
	I1120 21:11:41.560163  837622 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:41.560244  837622 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:41.560315  837622 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:41.560320  837622 kubeadm.go:319] 
	I1120 21:11:41.560427  837622 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:41.560541  837622 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:41.560563  837622 kubeadm.go:319] 
	I1120 21:11:41.560669  837622 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kdkmn8.0zmhcrclk06dr83a \
	I1120 21:11:41.560806  837622 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 21:11:41.560889  837622 kubeadm.go:319] 	--control-plane 
	I1120 21:11:41.560899  837622 kubeadm.go:319] 
	I1120 21:11:41.561087  837622 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:41.561096  837622 kubeadm.go:319] 
	I1120 21:11:41.561191  837622 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kdkmn8.0zmhcrclk06dr83a \
	I1120 21:11:41.561308  837622 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 21:11:41.565203  837622 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:41.565464  837622 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:41.565592  837622 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:41.565615  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:11:41.565631  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:11:41.568897  837622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:41.571851  837622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:41.576161  837622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:41.576185  837622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:41.589463  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:41.888208  837622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:41.888353  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:41.888434  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-828342 minikube.k8s.io/updated_at=2025_11_20T21_11_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-828342 minikube.k8s.io/primary=true
	I1120 21:11:42.038367  837622 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:42.038521  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:42.538895  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:43.039534  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:43.539110  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:44.039237  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:44.538657  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.038876  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.538941  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.624646  837622 kubeadm.go:1114] duration metric: took 3.736358353s to wait for elevateKubeSystemPrivileges
	I1120 21:11:45.624692  837622 kubeadm.go:403] duration metric: took 19.206086449s to StartCluster
	I1120 21:11:45.624710  837622 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:45.625450  837622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:11:45.625871  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:45.626086  837622 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:11:45.626223  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:45.626500  837622 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 21:11:45.626595  837622 addons.go:70] Setting yakd=true in profile "addons-828342"
	I1120 21:11:45.626616  837622 addons.go:239] Setting addon yakd=true in "addons-828342"
	I1120 21:11:45.626639  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.627123  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.627373  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:45.627422  837622 addons.go:70] Setting inspektor-gadget=true in profile "addons-828342"
	I1120 21:11:45.627433  837622 addons.go:239] Setting addon inspektor-gadget=true in "addons-828342"
	I1120 21:11:45.627453  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.627834  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.628065  837622 addons.go:70] Setting metrics-server=true in profile "addons-828342"
	I1120 21:11:45.628086  837622 addons.go:239] Setting addon metrics-server=true in "addons-828342"
	I1120 21:11:45.628110  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.628525  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.628808  837622 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-828342"
	I1120 21:11:45.628873  837622 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-828342"
	I1120 21:11:45.628896  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.629316  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.631517  837622 addons.go:70] Setting cloud-spanner=true in profile "addons-828342"
	I1120 21:11:45.631545  837622 addons.go:239] Setting addon cloud-spanner=true in "addons-828342"
	I1120 21:11:45.631596  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.632078  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.632929  837622 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-828342"
	I1120 21:11:45.632962  837622 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-828342"
	I1120 21:11:45.633001  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.633420  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.636780  837622 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-828342"
	I1120 21:11:45.636858  837622 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-828342"
	I1120 21:11:45.636890  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.637345  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.639846  837622 addons.go:70] Setting registry=true in profile "addons-828342"
	I1120 21:11:45.639877  837622 addons.go:239] Setting addon registry=true in "addons-828342"
	I1120 21:11:45.639924  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.640526  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.647025  837622 addons.go:70] Setting default-storageclass=true in profile "addons-828342"
	I1120 21:11:45.647067  837622 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-828342"
	I1120 21:11:45.647389  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.655091  837622 addons.go:70] Setting registry-creds=true in profile "addons-828342"
	I1120 21:11:45.655122  837622 addons.go:239] Setting addon registry-creds=true in "addons-828342"
	I1120 21:11:45.655157  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.655635  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.660217  837622 addons.go:70] Setting gcp-auth=true in profile "addons-828342"
	I1120 21:11:45.660252  837622 mustload.go:66] Loading cluster: addons-828342
	I1120 21:11:45.660458  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:45.660743  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.677052  837622 addons.go:70] Setting ingress=true in profile "addons-828342"
	I1120 21:11:45.677082  837622 addons.go:239] Setting addon ingress=true in "addons-828342"
	I1120 21:11:45.677226  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.677700  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.683581  837622 addons.go:70] Setting storage-provisioner=true in profile "addons-828342"
	I1120 21:11:45.716620  837622 addons.go:239] Setting addon storage-provisioner=true in "addons-828342"
	I1120 21:11:45.716718  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.717477  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.684657  837622 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-828342"
	I1120 21:11:45.742110  837622 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-828342"
	I1120 21:11:45.742828  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.746241  837622 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 21:11:45.746484  837622 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 21:11:45.763018  837622 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 21:11:45.763091  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 21:11:45.763202  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.684860  837622 addons.go:70] Setting volcano=true in profile "addons-828342"
	I1120 21:11:45.764017  837622 addons.go:239] Setting addon volcano=true in "addons-828342"
	I1120 21:11:45.764082  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.684907  837622 addons.go:70] Setting volumesnapshots=true in profile "addons-828342"
	I1120 21:11:45.764431  837622 addons.go:239] Setting addon volumesnapshots=true in "addons-828342"
	I1120 21:11:45.764479  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.764941  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.687145  837622 addons.go:70] Setting ingress-dns=true in profile "addons-828342"
	I1120 21:11:45.775957  837622 addons.go:239] Setting addon ingress-dns=true in "addons-828342"
	I1120 21:11:45.776031  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.776551  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.787148  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 21:11:45.787218  837622 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 21:11:45.787321  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.687189  837622 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:45.805357  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.808254  837622 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 21:11:45.810466  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:45.827864  837622 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 21:11:45.830780  837622 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 21:11:45.830810  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 21:11:45.830873  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.847049  837622 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 21:11:45.847068  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 21:11:45.850709  837622 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 21:11:45.850790  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.865586  837622 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 21:11:45.865607  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 21:11:45.865681  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.847073  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 21:11:45.888350  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 21:11:45.888998  837622 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 21:11:45.891370  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 21:11:45.891515  837622 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 21:11:45.891529  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1120 21:11:45.891639  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.926003  837622 addons.go:239] Setting addon default-storageclass=true in "addons-828342"
	I1120 21:11:45.926066  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.926614  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.927225  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.959592  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:45.960655  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 21:11:45.961793  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:45.968935  837622 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 21:11:45.969045  837622 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 21:11:45.975194  837622 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-828342"
	I1120 21:11:45.975252  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.975784  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.995405  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 21:11:45.997808  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 21:11:45.998173  837622 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 21:11:45.998188  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 21:11:45.998255  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.023494  837622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:11:46.030443  837622 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:46.030522  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:46.030638  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.037952  837622 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 21:11:46.043151  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 21:11:46.043331  837622 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 21:11:46.043354  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 21:11:46.043424  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.061114  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:46.061522  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.067757  837622 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 21:11:46.067781  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 21:11:46.067853  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.071162  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 21:11:46.074108  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 21:11:46.081022  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 21:11:46.081058  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 21:11:46.081140  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.094107  837622 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 21:11:46.096970  837622 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 21:11:46.096994  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 21:11:46.097062  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.104788  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.116887  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.124000  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	W1120 21:11:46.124089  837622 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 21:11:46.127178  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.135613  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 21:11:46.138586  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 21:11:46.138613  837622 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 21:11:46.138692  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.138843  837622 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 21:11:46.141759  837622 out.go:179]   - Using image docker.io/busybox:stable
	I1120 21:11:46.147697  837622 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 21:11:46.147731  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 21:11:46.147797  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.191035  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.223114  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.225100  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.250745  837622 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:46.250766  837622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:46.250832  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.264440  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.264999  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.286589  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.288002  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.299241  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.307833  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	W1120 21:11:46.311464  837622 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 21:11:46.311508  837622 retry.go:31] will retry after 223.009585ms: ssh: handshake failed: EOF
	I1120 21:11:46.312725  837622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:46.333882  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.817370  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 21:11:46.817396  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 21:11:46.999049  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 21:11:47.008875  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 21:11:47.008900  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 21:11:47.073124  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 21:11:47.073151  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 21:11:47.083135  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 21:11:47.121400  837622 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 21:11:47.121425  837622 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 21:11:47.146676  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 21:11:47.165061  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 21:11:47.165087  837622 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 21:11:47.179014  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 21:11:47.198387  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 21:11:47.198413  837622 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 21:11:47.214671  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 21:11:47.219846  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 21:11:47.219871  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 21:11:47.262047  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 21:11:47.276469  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 21:11:47.304784  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 21:11:47.304815  837622 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 21:11:47.327442  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:47.332376  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:47.338398  837622 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 21:11:47.338421  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 21:11:47.361299  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 21:11:47.361324  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 21:11:47.398387  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 21:11:47.398413  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 21:11:47.461930  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 21:11:47.461954  837622 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 21:11:47.516328  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 21:11:47.536784  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 21:11:47.541305  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 21:11:47.575058  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 21:11:47.575081  837622 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 21:11:47.674765  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 21:11:47.674791  837622 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 21:11:47.677666  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 21:11:47.677691  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 21:11:47.700335  837622 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.740705357s)
	I1120 21:11:47.700364  837622 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1120 21:11:47.701331  837622 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.388578289s)
	I1120 21:11:47.701945  837622 node_ready.go:35] waiting up to 6m0s for node "addons-828342" to be "Ready" ...
	I1120 21:11:47.774161  837622 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:47.774185  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 21:11:47.902719  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 21:11:47.902745  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 21:11:47.930807  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 21:11:47.930831  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 21:11:47.963580  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:48.109221  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 21:11:48.137479  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 21:11:48.137556  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 21:11:48.205867  837622 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-828342" context rescaled to 1 replicas
	I1120 21:11:48.392428  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 21:11:48.392503  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 21:11:48.601064  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 21:11:48.601142  837622 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 21:11:48.864932  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 21:11:48.864953  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 21:11:48.886375  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 21:11:48.886396  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 21:11:48.902377  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 21:11:48.902398  837622 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 21:11:48.920137  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1120 21:11:49.745093  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:50.523043  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.439869235s)
	I1120 21:11:50.523107  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.376412558s)
	I1120 21:11:50.523130  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.344098554s)
	I1120 21:11:50.523150  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.308459104s)
	I1120 21:11:50.523224  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.524092253s)
	W1120 21:11:52.220089  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:52.224973  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.962888478s)
	I1120 21:11:52.225003  837622 addons.go:480] Verifying addon ingress=true in "addons-828342"
	I1120 21:11:52.225242  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.948742376s)
	I1120 21:11:52.225299  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.897835687s)
	I1120 21:11:52.225546  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.893143232s)
	I1120 21:11:52.225608  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.709254658s)
	I1120 21:11:52.225617  837622 addons.go:480] Verifying addon metrics-server=true in "addons-828342"
	I1120 21:11:52.225660  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.688850881s)
	I1120 21:11:52.225762  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.684432841s)
	I1120 21:11:52.225770  837622 addons.go:480] Verifying addon registry=true in "addons-828342"
	I1120 21:11:52.226189  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.262572736s)
	W1120 21:11:52.226217  837622 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 21:11:52.226233  837622 retry.go:31] will retry after 225.039773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 21:11:52.226273  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117023249s)
	I1120 21:11:52.228518  837622 out.go:179] * Verifying ingress addon...
	I1120 21:11:52.230681  837622 out.go:179] * Verifying registry addon...
	I1120 21:11:52.232657  837622 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-828342 service yakd-dashboard -n yakd-dashboard
	
	I1120 21:11:52.233509  837622 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 21:11:52.236542  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 21:11:52.244165  837622 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 21:11:52.244193  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:52.248815  837622 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 21:11:52.248861  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1120 21:11:52.263333  837622 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1120 21:11:52.451562  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:52.525647  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.605423152s)
	I1120 21:11:52.525680  837622 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-828342"
	I1120 21:11:52.528007  837622 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 21:11:52.531731  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 21:11:52.546541  837622 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 21:11:52.546562  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:52.740006  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:52.740762  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.035695  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:53.238822  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:53.239087  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.535514  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:53.634738  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 21:11:53.634854  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:53.652131  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:53.737382  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:53.739068  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.761217  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 21:11:53.775492  837622 addons.go:239] Setting addon gcp-auth=true in "addons-828342"
	I1120 21:11:53.775544  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:53.775999  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:53.793047  837622 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 21:11:53.793107  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:53.811634  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:54.035752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:54.237901  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:54.239177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:54.537038  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:54.705098  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:54.737809  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:54.740298  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:55.035669  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:55.158637  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.707017688s)
	I1120 21:11:55.158776  837622 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.365585263s)
	I1120 21:11:55.161611  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:55.164439  837622 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 21:11:55.167378  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 21:11:55.167402  837622 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 21:11:55.181031  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 21:11:55.181055  837622 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 21:11:55.194157  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 21:11:55.194190  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 21:11:55.209775  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 21:11:55.237647  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:55.240206  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:55.535582  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:55.721759  837622 addons.go:480] Verifying addon gcp-auth=true in "addons-828342"
	I1120 21:11:55.724880  837622 out.go:179] * Verifying gcp-auth addon...
	I1120 21:11:55.729408  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 21:11:55.732278  837622 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 21:11:55.732305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:55.740599  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:55.741555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:56.035183  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:56.232626  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:56.237394  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:56.239284  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:56.535467  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:56.705624  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:56.732800  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:56.736329  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:56.739571  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:57.035122  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:57.232609  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:57.239226  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:57.239796  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:57.535045  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:57.732438  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:57.736721  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:57.747375  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:58.035807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:58.232550  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:58.236176  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:58.239701  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:58.536089  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:58.733078  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:58.736593  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:58.739166  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:59.035694  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:59.205282  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:59.233230  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:59.237244  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:59.239029  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:59.535541  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:59.732254  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:59.736838  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:59.739223  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:00.047865  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:00.247149  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:00.250410  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:00.253641  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:00.535573  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:00.733229  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:00.737624  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:00.739660  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:01.035563  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:01.205794  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:01.232668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:01.236799  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:01.239380  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:01.537258  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:01.733017  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:01.738084  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:01.740280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:02.035769  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:02.232948  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:02.236925  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:02.239640  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:02.535566  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:02.732210  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:02.737255  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:02.739613  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:03.035818  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:03.232947  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:03.236770  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:03.239291  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:03.535581  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:03.705384  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:03.732177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:03.737220  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:03.739681  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:04.034630  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:04.232499  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:04.237330  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:04.239237  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:04.536287  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:04.733182  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:04.737084  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:04.739113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:05.036080  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:05.233493  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:05.237230  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:05.239266  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:05.535585  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:05.705649  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:05.732546  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:05.736491  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:05.740247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:06.034801  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:06.232630  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:06.237716  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:06.240005  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:06.536217  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:06.733285  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:06.738239  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:06.739975  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:07.036406  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:07.233027  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:07.236458  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:07.241356  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:07.535619  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:07.732063  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:07.736927  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:07.739120  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:08.035811  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:08.204728  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:08.232715  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:08.236381  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:08.239990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:08.535564  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:08.732082  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:08.737108  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:08.739330  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:09.036283  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:09.233189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:09.236560  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:09.240014  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:09.536134  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:09.732671  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:09.736189  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:09.739556  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:10.035225  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:10.205320  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:10.233725  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:10.236184  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:10.239773  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:10.535632  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:10.732679  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:10.737225  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:10.739254  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:11.036019  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:11.232718  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:11.236630  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:11.238845  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:11.535248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:11.732180  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:11.737847  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:11.739990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:12.035575  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:12.233060  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:12.236613  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:12.238868  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:12.535931  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:12.704872  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:12.732964  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:12.736357  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:12.739686  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:13.035679  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:13.232712  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:13.236592  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:13.239270  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:13.535306  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:13.732907  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:13.736587  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:13.739648  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:14.034887  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:14.233189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:14.237249  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:14.239535  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:14.536305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:14.705240  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:14.733140  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:14.736641  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:14.739111  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:15.037305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:15.232599  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:15.237550  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:15.239453  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:15.536469  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:15.732555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:15.736945  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:15.739113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:16.035782  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:16.233699  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:16.236389  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:16.239757  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:16.535813  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:16.705738  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:16.732899  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:16.736449  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:16.740017  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:17.035241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:17.233094  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:17.236811  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:17.239426  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:17.536372  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:17.732855  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:17.736461  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:17.739790  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:18.034939  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:18.232771  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:18.236891  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:18.239153  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:18.539037  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:18.732640  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:18.737342  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:18.739930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:19.035197  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:19.205372  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:19.233174  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:19.237079  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:19.239329  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:19.536758  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:19.732364  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:19.737533  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:19.739646  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:20.035231  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:20.233877  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:20.236643  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:20.239190  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:20.535577  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:20.732794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:20.737580  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:20.739410  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:21.034635  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:21.205548  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:21.233498  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:21.237001  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:21.242370  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:21.535345  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:21.738960  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:21.739132  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:21.740567  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:22.034580  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:22.233246  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:22.237313  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:22.239654  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:22.534730  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:22.732754  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:22.737491  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:22.739705  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:23.034741  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:23.205589  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:23.232251  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:23.236996  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:23.239620  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:23.534511  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:23.733462  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:23.738478  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:23.739710  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:24.034929  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:24.232786  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:24.236679  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:24.239235  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:24.535658  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:24.732461  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:24.736469  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:24.740106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:25.035212  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:25.232859  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:25.237076  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:25.239390  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:25.535355  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:25.705478  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:25.733097  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:25.737696  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:25.739963  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:26.035569  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:26.232410  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:26.237564  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:26.239838  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:26.534847  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:26.732696  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:26.736778  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:26.739260  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.035568  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:27.233349  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:27.237471  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:27.239783  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.535305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:27.740837  837622 node_ready.go:49] node "addons-828342" is "Ready"
	I1120 21:12:27.740918  837622 node_ready.go:38] duration metric: took 40.038947656s for node "addons-828342" to be "Ready" ...
	I1120 21:12:27.740959  837622 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:12:27.741059  837622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:12:27.751390  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:27.757743  837622 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 21:12:27.757765  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.764786  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:27.765205  837622 api_server.go:72] duration metric: took 42.139086459s to wait for apiserver process to appear ...
	I1120 21:12:27.765225  837622 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:12:27.765243  837622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:12:27.804794  837622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:12:27.808448  837622 api_server.go:141] control plane version: v1.34.1
	I1120 21:12:27.808482  837622 api_server.go:131] duration metric: took 43.249439ms to wait for apiserver health ...
	I1120 21:12:27.808491  837622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:12:27.820770  837622 system_pods.go:59] 19 kube-system pods found
	I1120 21:12:27.820813  837622 system_pods.go:61] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending
	I1120 21:12:27.820829  837622 system_pods.go:61] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:27.820839  837622 system_pods.go:61] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:27.820845  837622 system_pods.go:61] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:27.820851  837622 system_pods.go:61] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:27.820855  837622 system_pods.go:61] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:27.820864  837622 system_pods.go:61] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:27.820869  837622 system_pods.go:61] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:27.820877  837622 system_pods.go:61] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending
	I1120 21:12:27.820882  837622 system_pods.go:61] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:27.820893  837622 system_pods.go:61] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:27.820897  837622 system_pods.go:61] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending
	I1120 21:12:27.820904  837622 system_pods.go:61] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:27.820914  837622 system_pods.go:61] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:27.820919  837622 system_pods.go:61] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:27.820924  837622 system_pods.go:61] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:27.820932  837622 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.820942  837622 system_pods.go:61] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending
	I1120 21:12:27.820946  837622 system_pods.go:61] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending
	I1120 21:12:27.820951  837622 system_pods.go:74] duration metric: took 12.454839ms to wait for pod list to return data ...
	I1120 21:12:27.820959  837622 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:12:27.832167  837622 default_sa.go:45] found service account: "default"
	I1120 21:12:27.832196  837622 default_sa.go:55] duration metric: took 11.229405ms for default service account to be created ...
	I1120 21:12:27.832205  837622 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:12:27.852035  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:27.852074  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:27.852083  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:27.852091  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:27.852096  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:27.852101  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:27.852106  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:27.852110  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:27.852115  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:27.852124  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending
	I1120 21:12:27.852128  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:27.852135  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:27.852140  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending
	I1120 21:12:27.852144  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:27.852149  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:27.852158  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:27.852164  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:27.852171  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.852182  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.852187  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending
	I1120 21:12:27.852200  837622 retry.go:31] will retry after 245.675374ms: missing components: kube-dns
	I1120 21:12:28.042008  837622 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 21:12:28.042034  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:28.129077  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.129115  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.129123  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:28.129130  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.129134  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:28.129140  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.129145  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.129151  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.129159  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.129165  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.129170  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.129177  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.129183  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.129194  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:28.129201  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.129205  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:28.129222  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:28.129228  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.129235  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.129241  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.129258  837622 retry.go:31] will retry after 344.736357ms: missing components: kube-dns
	I1120 21:12:28.236398  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:28.243155  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:28.249209  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:28.484942  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.484981  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.484993  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:28.485009  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.485017  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:28.485027  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.485033  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.485042  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.485047  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.485053  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.485064  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.485069  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.485076  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.485087  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:28.485094  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.485109  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:28.485118  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:28.485124  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.485133  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.485141  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.485161  837622 retry.go:31] will retry after 429.263194ms: missing components: kube-dns
	I1120 21:12:28.582597  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:28.732770  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:28.736810  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:28.740411  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:28.924574  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.924612  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.924622  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:28.924629  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.924635  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:28.924640  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.924647  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.924652  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.924657  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.924663  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.924667  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.924671  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.924678  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.924684  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:28.924690  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.924706  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:28.924717  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:28.924724  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.924731  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.924743  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.924758  837622 retry.go:31] will retry after 390.95466ms: missing components: kube-dns
	I1120 21:12:29.035782  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:29.245140  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:29.245814  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:29.248207  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:29.323099  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:29.323132  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Running
	I1120 21:12:29.323142  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:29.323149  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:29.323156  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:29.323161  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:29.323165  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:29.323171  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:29.323178  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:29.323185  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:29.323207  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:29.323218  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:29.323224  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:29.323231  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:29.323241  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:29.323249  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:29.323255  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:29.323261  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:29.323269  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:29.323273  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Running
	I1120 21:12:29.323284  837622 system_pods.go:126] duration metric: took 1.491071478s to wait for k8s-apps to be running ...
	I1120 21:12:29.323295  837622 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:12:29.323355  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:12:29.341968  837622 system_svc.go:56] duration metric: took 18.663055ms WaitForService to wait for kubelet
	I1120 21:12:29.341997  837622 kubeadm.go:587] duration metric: took 43.715881455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:12:29.342016  837622 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:12:29.346088  837622 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:12:29.346125  837622 node_conditions.go:123] node cpu capacity is 2
	I1120 21:12:29.346139  837622 node_conditions.go:105] duration metric: took 4.117031ms to run NodePressure ...
	I1120 21:12:29.346151  837622 start.go:242] waiting for startup goroutines ...
	I1120 21:12:29.537319  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:29.732336  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:29.737421  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:29.740464  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:30.046320  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:30.233170  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:30.237134  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:30.239668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:30.536438  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:30.733901  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:30.737078  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:30.739119  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:31.035608  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:31.232752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:31.236456  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:31.240089  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:31.537283  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:31.735821  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:31.747197  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:31.834516  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:32.035721  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:32.233542  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:32.237487  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:32.241621  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:32.536003  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:32.733900  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:32.737222  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:32.739738  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:33.035326  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:33.233368  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:33.238424  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:33.240049  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:33.537539  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:33.732693  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:33.737561  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:33.739545  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:34.036175  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:34.233326  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:34.237018  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:34.239561  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:34.535241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:34.732657  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:34.736795  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:34.740480  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:35.036433  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:35.233419  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:35.238511  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:35.240296  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:35.536557  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:35.732895  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:35.736905  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:35.739669  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:36.035923  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:36.233136  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:36.238362  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:36.240607  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:36.535752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:36.733289  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:36.737919  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:36.739906  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:37.035671  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:37.232685  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:37.236753  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:37.239294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:37.536108  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:37.736924  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:37.739680  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:37.744399  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:38.035930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:38.233382  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:38.238054  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:38.244291  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:38.538792  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:38.733302  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:38.737560  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:38.739495  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:39.034926  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:39.233247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:39.237165  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:39.239505  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:39.537182  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:39.733786  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:39.736396  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:39.739990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:40.046430  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:40.233295  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:40.237345  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:40.240190  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:40.535668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:40.733123  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:40.737942  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:40.740247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:41.035710  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:41.232454  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:41.238251  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:41.239235  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:41.537045  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:41.733375  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:41.739135  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:41.740485  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:42.036093  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:42.233833  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:42.238765  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:42.241162  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:42.537878  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:42.733743  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:42.737793  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:42.739497  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:43.035239  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:43.233351  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:43.237101  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:43.239106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:43.544807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:43.733244  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:43.738635  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:43.740808  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:44.036568  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:44.232835  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:44.236896  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:44.240229  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:44.537308  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:44.737158  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:44.744747  837622 kapi.go:107] duration metric: took 52.508202833s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 21:12:44.745294  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:45.046351  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:45.239434  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:45.239930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:45.536176  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:45.732297  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:45.738288  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:46.036422  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:46.232794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:46.236680  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:46.535485  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:46.733729  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:46.736724  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:47.038654  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:47.233393  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:47.237718  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:47.541712  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:47.733382  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:47.737711  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:48.035899  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:48.233114  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:48.236951  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:48.535278  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:48.733515  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:48.737328  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:49.036153  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:49.233247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:49.238101  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:49.535945  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:49.733566  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:49.736988  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:50.037294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:50.241987  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:50.242324  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:50.535854  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:50.734925  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:50.746380  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:51.036852  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:51.233459  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:51.237535  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:51.536198  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:51.733211  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:51.742637  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:52.036362  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:52.233631  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:52.237216  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:52.537152  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:52.732544  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:52.753418  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:53.037021  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:53.233520  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:53.237883  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:53.535128  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:53.733232  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:53.738108  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:54.036639  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:54.233317  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:54.237382  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:54.537398  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:54.733825  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:54.738098  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:55.038273  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:55.237177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:55.239130  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:55.538087  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:55.736625  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:55.738544  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:56.036580  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:56.232957  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:56.237361  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:56.541300  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:56.733953  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:56.736785  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:57.036078  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:57.233128  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:57.236914  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:57.535592  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:57.733407  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:57.738235  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:58.036037  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:58.233954  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:58.236735  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:58.535736  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:58.735296  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:58.736996  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:59.035248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:59.233103  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:59.236993  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:59.536234  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:59.733903  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:59.737457  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:00.084586  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:00.239689  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:00.239824  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:00.536441  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:00.733148  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:00.737466  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:01.035820  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:01.233452  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:01.238473  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:01.536216  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:01.733589  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:01.736546  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:02.036663  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:02.233116  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:02.237842  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:02.535915  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:02.732135  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:02.736953  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:03.035954  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:03.232930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:03.236858  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:03.536194  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:03.733392  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:03.737262  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:04.035944  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:04.233285  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:04.237470  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:04.536807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:04.734228  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:04.736721  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:05.036292  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:05.232676  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:05.236343  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:05.536240  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:05.733106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:05.736752  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:06.036120  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:06.232569  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:06.237531  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:06.536429  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:06.732407  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:06.737628  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:07.035427  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:07.232293  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:07.239843  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:07.535400  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:07.733189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:07.736807  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:08.035418  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:08.232431  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:08.237366  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:08.535932  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:08.733073  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:08.737219  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:09.035989  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:09.233173  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:09.237411  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:09.535851  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:09.732794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:09.737304  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.038932  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:10.234766  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:10.238143  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.547209  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:10.739389  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.739860  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.036525  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:11.234181  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.236534  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:11.539611  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:11.732778  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.736540  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:12.036360  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:12.232276  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:12.237443  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:12.534870  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:12.737628  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:12.739449  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:13.036858  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:13.233113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:13.237011  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:13.536206  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:13.732591  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:13.737039  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:14.039251  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:14.232684  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:14.237021  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:14.535474  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:14.733611  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:14.737325  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:15.038896  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:15.233357  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:15.237174  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:15.548555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:15.743700  837622 kapi.go:107] duration metric: took 1m23.510188589s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 21:13:15.743838  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:16.036937  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:16.233369  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:16.536294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:16.732586  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:17.100889  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:17.233095  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:17.535704  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:17.733043  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:18.037599  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:18.233052  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:18.536420  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:18.732715  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:19.036913  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:19.234855  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:19.560113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:19.747119  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:20.036416  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:20.233583  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:20.540993  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:20.735951  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:21.037567  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:21.232393  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:21.535523  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:21.733112  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:22.036125  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:22.234377  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:22.536280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:22.732796  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:23.042015  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:23.233160  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:23.536248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:23.733218  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:24.045653  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:24.233532  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:24.536771  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:24.733088  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:25.038354  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:25.233023  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:25.536040  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:25.741667  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:26.035653  837622 kapi.go:107] duration metric: took 1m33.503922107s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 21:13:26.232820  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:26.732695  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:27.233058  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:27.732789  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:28.233183  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:28.732305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:29.232661  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:29.733241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:30.233629  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:30.733232  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:31.232719  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:31.733060  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:32.232745  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:32.733790  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:33.233909  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:33.733593  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:34.232764  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:34.741280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:35.233616  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:35.733478  837622 kapi.go:107] duration metric: took 1m40.004071116s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 21:13:35.734600  837622 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-828342 cluster.
	I1120 21:13:35.735701  837622 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 21:13:35.736834  837622 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1120 21:13:35.738046  837622 out.go:179] * Enabled addons: cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1120 21:13:35.740070  837622 addons.go:515] duration metric: took 1m50.113555557s for enable addons: enabled=[cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1120 21:13:35.740112  837622 start.go:247] waiting for cluster config update ...
	I1120 21:13:35.740133  837622 start.go:256] writing updated cluster config ...
	I1120 21:13:35.740413  837622 ssh_runner.go:195] Run: rm -f paused
	I1120 21:13:35.745864  837622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:13:35.766366  837622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.774029  837622 pod_ready.go:94] pod "coredns-66bc5c9577-k2xjd" is "Ready"
	I1120 21:13:35.774112  837622 pod_ready.go:86] duration metric: took 7.721194ms for pod "coredns-66bc5c9577-k2xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.861930  837622 pod_ready.go:83] waiting for pod "etcd-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.868429  837622 pod_ready.go:94] pod "etcd-addons-828342" is "Ready"
	I1120 21:13:35.868454  837622 pod_ready.go:86] duration metric: took 6.496629ms for pod "etcd-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.870912  837622 pod_ready.go:83] waiting for pod "kube-apiserver-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.877730  837622 pod_ready.go:94] pod "kube-apiserver-addons-828342" is "Ready"
	I1120 21:13:35.877756  837622 pod_ready.go:86] duration metric: took 6.815164ms for pod "kube-apiserver-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.884045  837622 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.150385  837622 pod_ready.go:94] pod "kube-controller-manager-addons-828342" is "Ready"
	I1120 21:13:36.150411  837622 pod_ready.go:86] duration metric: took 266.339003ms for pod "kube-controller-manager-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.350021  837622 pod_ready.go:83] waiting for pod "kube-proxy-7p2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.750235  837622 pod_ready.go:94] pod "kube-proxy-7p2c4" is "Ready"
	I1120 21:13:36.750265  837622 pod_ready.go:86] duration metric: took 400.215258ms for pod "kube-proxy-7p2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.950410  837622 pod_ready.go:83] waiting for pod "kube-scheduler-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:37.349905  837622 pod_ready.go:94] pod "kube-scheduler-addons-828342" is "Ready"
	I1120 21:13:37.349940  837622 pod_ready.go:86] duration metric: took 399.504603ms for pod "kube-scheduler-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:37.349953  837622 pod_ready.go:40] duration metric: took 1.604057387s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:13:37.410721  837622 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 21:13:37.412323  837622 out.go:179] * Done! kubectl is now configured to use "addons-828342" cluster and "default" namespace by default
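	(For reference, the gcp-auth output above mentions opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest follows; the label key and the --refresh hint are taken from the log lines above, while the pod name, image, and the "true" value are illustrative assumptions rather than something this report verifies.)

    # hypothetical pod that should not get GCP credentials mounted by the gcp-auth addon
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # assumed name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # label key referenced in the gcp-auth output above
    spec:
      containers:
      - name: app
        image: busybox                   # placeholder image
        command: ["sleep", "3600"]

	(As the log notes, pods that already exist would either be recreated after labelling, or `addons enable` rerun with --refresh.)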
	
	
	==> CRI-O <==
	Nov 20 21:16:54 addons-828342 crio[826]: time="2025-11-20T21:16:54.579744414Z" level=info msg="Removed container ea3702a650be11854f3a4f6a4dd40aa6fb8b16764f5ccc6ae9843b83fe8d0ee2: kube-system/registry-creds-764b6fb674-6zgsm/registry-creds" id=161496f9-55d3-4552-a141-b6cc23737683 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.865330022Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-72ktx/POD" id=712d9cb3-666f-4c95-a793-daf37bf213b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.865395762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.923765248Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-72ktx Namespace:default ID:2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99 UID:1ffd3f47-9f81-4d26-8043-13e307ceb54f NetNS:/var/run/netns/f5ffb755-d75d-47b7-8e50-a91734a77f8d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002750040}] Aliases:map[]}"
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.923945895Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-72ktx to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.939916831Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-72ktx Namespace:default ID:2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99 UID:1ffd3f47-9f81-4d26-8043-13e307ceb54f NetNS:/var/run/netns/f5ffb755-d75d-47b7-8e50-a91734a77f8d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002750040}] Aliases:map[]}"
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.94009431Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-72ktx for CNI network kindnet (type=ptp)"
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.94457443Z" level=info msg="Ran pod sandbox 2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99 with infra container: default/hello-world-app-5d498dc89-72ktx/POD" id=712d9cb3-666f-4c95-a793-daf37bf213b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.951574672Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=30a28a4c-6be3-459e-8a6d-64df7a06ee11 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.95179152Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=30a28a4c-6be3-459e-8a6d-64df7a06ee11 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.951844099Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=30a28a4c-6be3-459e-8a6d-64df7a06ee11 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.953028171Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=711a0b62-37d0-435c-a21b-2fec20dd60c5 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:17:02 addons-828342 crio[826]: time="2025-11-20T21:17:02.957007797Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.575532368Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=711a0b62-37d0-435c-a21b-2fec20dd60c5 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.576376055Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=036c9cdb-1309-42b6-927f-4cc33e63bed9 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.581441019Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ddac2f23-acbc-4334-9980-86a7d7315c70 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.59296249Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-72ktx/hello-world-app" id=c6d71b56-cb1b-4cc4-8a66-2a27e7eeb4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.593227732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.623102124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.623323034Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/15a5e1221e562aa7ef4acc73cfefe1f3774a136bbf81b49bbd72701f931a241e/merged/etc/passwd: no such file or directory"
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.623356117Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/15a5e1221e562aa7ef4acc73cfefe1f3774a136bbf81b49bbd72701f931a241e/merged/etc/group: no such file or directory"
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.623637647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.646187519Z" level=info msg="Created container 51a6892259aadcf5a3a5b36594d9fd8d45cdd91c59274cbc83cafb047b06d12a: default/hello-world-app-5d498dc89-72ktx/hello-world-app" id=c6d71b56-cb1b-4cc4-8a66-2a27e7eeb4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.647466673Z" level=info msg="Starting container: 51a6892259aadcf5a3a5b36594d9fd8d45cdd91c59274cbc83cafb047b06d12a" id=d850e7b9-2cdd-4ed2-84d6-9c15777fc8c4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:17:03 addons-828342 crio[826]: time="2025-11-20T21:17:03.64952868Z" level=info msg="Started container" PID=7239 containerID=51a6892259aadcf5a3a5b36594d9fd8d45cdd91c59274cbc83cafb047b06d12a description=default/hello-world-app-5d498dc89-72ktx/hello-world-app id=d850e7b9-2cdd-4ed2-84d6-9c15777fc8c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	51a6892259aad       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   2f8ef92903104       hello-world-app-5d498dc89-72ktx            default
	4c1c5ac17fded       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           2                   fca8b090ae3d0       registry-creds-764b6fb674-6zgsm            kube-system
	09b93272fce03       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   e740e4fa1c222       nginx                                      default
	b0a1af515e664       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   a9763d198a84a       busybox                                    default
	f89684ba3974d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   80baf843833cb       gcp-auth-78565c9fb4-xchxl                  gcp-auth
	048a91057c75b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	e1b29a88eeca4       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	4cf3d3324d8e7       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	95aebe3ee5042       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	22af9833d5a05       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   ffc3ec4e6c29a       gadget-rkcm9                               gadget
	327a93daa8d9b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   998d839f19021       ingress-nginx-controller-6c8bf45fb-7xbwt   ingress-nginx
	e0b907ada2744       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	587113023c460       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   fcaad6f163d0d       yakd-dashboard-5ff678cb9-788wg             yakd-dashboard
	d877d3a1d3b44       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   9419e4e2c7682       nvidia-device-plugin-daemonset-sh7sx       kube-system
	30158179e15c3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   ddef337b0bbed       snapshot-controller-7d9fbc56b8-plxlw       kube-system
	070e65f471ee1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   bdc2d78ba839e       local-path-provisioner-648f6765c9-zsvx2    local-path-storage
	a93f40eb30f48       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   839ef8206254a       csi-hostpath-attacher-0                    kube-system
	c5c88ac4e46db       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   0f216ced2014f       csi-hostpath-resizer-0                     kube-system
	f5429fe8d6eae       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   94208d8aa2550       metrics-server-85b7d694d7-hwvxs            kube-system
	12065726cc690       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   8e5a5c82ae9c4       kube-ingress-dns-minikube                  kube-system
	1c5f45287ca2f       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             4 minutes ago            Exited              patch                                    2                   0c35fb6c28126       ingress-nginx-admission-patch-n279x        ingress-nginx
	1c684f5b792d7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   89e529b563aad       registry-proxy-k8tlb                       kube-system
	284630d028c28       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   9f654e327c852       csi-hostpathplugin-l4wrc                   kube-system
	27420be397785       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              create                                   0                   e1a6307be0e94       ingress-nginx-admission-create-jxltn       ingress-nginx
	cbe1df1a85fe8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   81b056c07e3d4       cloud-spanner-emulator-6f9fcf858b-2p6j9    default
	58a00a031d21a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   01a887d41e1f7       registry-6b586f9694-5shs6                  kube-system
	a5870aba6804f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   fc24cb66dc37d       snapshot-controller-7d9fbc56b8-4sk4t       kube-system
	4dfccd2918ac5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   fd25e873c5cde       coredns-66bc5c9577-k2xjd                   kube-system
	c82f61a3038fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   eef098cdf4227       storage-provisioner                        kube-system
	20980cdb4eaaa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   118f0190d9296       kube-proxy-7p2c4                           kube-system
	6896f41cbd9c3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   463b35f90d92e       kindnet-mb5xh                              kube-system
	159ee609cc9eb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   01a3a920af154       kube-controller-manager-addons-828342      kube-system
	5e20cd420abae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   9b93659d335bb       kube-apiserver-addons-828342               kube-system
	1f333dfa546bf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   4b80006fc77d4       etcd-addons-828342                         kube-system
	303e566caaff9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   a6835c6176352       kube-scheduler-addons-828342               kube-system
	
	
	==> coredns [4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396] <==
	[INFO] 10.244.0.8:39998 - 25459 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.007050638s
	[INFO] 10.244.0.8:39998 - 46314 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000287134s
	[INFO] 10.244.0.8:39998 - 7518 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000321932s
	[INFO] 10.244.0.8:44384 - 11382 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148244s
	[INFO] 10.244.0.8:44384 - 11644 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000297874s
	[INFO] 10.244.0.8:57166 - 2988 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112354s
	[INFO] 10.244.0.8:57166 - 2759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101827s
	[INFO] 10.244.0.8:48136 - 28771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100776s
	[INFO] 10.244.0.8:48136 - 28599 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105125s
	[INFO] 10.244.0.8:45202 - 45913 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006390904s
	[INFO] 10.244.0.8:45202 - 46085 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006993825s
	[INFO] 10.244.0.8:60718 - 1787 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145421s
	[INFO] 10.244.0.8:60718 - 1375 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019512s
	[INFO] 10.244.0.21:40334 - 18585 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000198468s
	[INFO] 10.244.0.21:52255 - 59226 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000085975s
	[INFO] 10.244.0.21:53134 - 60261 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000150131s
	[INFO] 10.244.0.21:41377 - 31984 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081109s
	[INFO] 10.244.0.21:60288 - 42355 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102787s
	[INFO] 10.244.0.21:33125 - 21001 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101811s
	[INFO] 10.244.0.21:41499 - 36129 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001549426s
	[INFO] 10.244.0.21:55809 - 31914 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002183608s
	[INFO] 10.244.0.21:57791 - 57729 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000852778s
	[INFO] 10.244.0.21:46837 - 17090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002060414s
	[INFO] 10.244.0.23:44196 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234998s
	[INFO] 10.244.0.23:53941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165418s
	
	
	==> describe nodes <==
	Name:               addons-828342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-828342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-828342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-828342
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-828342"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-828342
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:16:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:16:47 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:16:47 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:16:47 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:16:47 +0000   Thu, 20 Nov 2025 21:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-828342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                08ccc810-1ae7-451c-8f54-003da7828560
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  default                     cloud-spanner-emulator-6f9fcf858b-2p6j9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     hello-world-app-5d498dc89-72ktx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-rkcm9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  gcp-auth                    gcp-auth-78565c9fb4-xchxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7xbwt    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m12s
	  kube-system                 coredns-66bc5c9577-k2xjd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m18s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 csi-hostpathplugin-l4wrc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 etcd-addons-828342                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m24s
	  kube-system                 kindnet-mb5xh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m18s
	  kube-system                 kube-apiserver-addons-828342                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-addons-828342       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-7p2c4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-addons-828342                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 metrics-server-85b7d694d7-hwvxs             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m14s
	  kube-system                 nvidia-device-plugin-daemonset-sh7sx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 registry-6b586f9694-5shs6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 registry-creds-764b6fb674-6zgsm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 registry-proxy-k8tlb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-4sk4t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-plxlw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  local-path-storage          local-path-provisioner-648f6765c9-zsvx2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-788wg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m17s                  kube-proxy       
	  Normal   Starting                 5m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node addons-828342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node addons-828342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m30s (x8 over 5m30s)  kubelet          Node addons-828342 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m23s                  kubelet          Node addons-828342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m23s                  kubelet          Node addons-828342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m23s                  kubelet          Node addons-828342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m19s                  node-controller  Node addons-828342 event: Registered Node addons-828342 in Controller
	  Normal   NodeReady                4m37s                  kubelet          Node addons-828342 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 19:42] overlayfs: idmapped layers are currently not supported
	[Nov20 19:43] overlayfs: idmapped layers are currently not supported
	[Nov20 19:44] overlayfs: idmapped layers are currently not supported
	[ +10.941558] overlayfs: idmapped layers are currently not supported
	[Nov20 19:45] overlayfs: idmapped layers are currently not supported
	[ +39.954456] overlayfs: idmapped layers are currently not supported
	[Nov20 19:46] overlayfs: idmapped layers are currently not supported
	[Nov20 19:48] overlayfs: idmapped layers are currently not supported
	[ +15.306261] overlayfs: idmapped layers are currently not supported
	[Nov20 19:49] overlayfs: idmapped layers are currently not supported
	[Nov20 19:50] overlayfs: idmapped layers are currently not supported
	[Nov20 19:51] overlayfs: idmapped layers are currently not supported
	[ +26.087379] overlayfs: idmapped layers are currently not supported
	[Nov20 19:52] overlayfs: idmapped layers are currently not supported
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190] <==
	{"level":"warn","ts":"2025-11-20T21:11:36.825494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.842649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.859863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.879083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.900849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.912102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.936368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.954583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.968160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.987986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.007936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.023563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.044458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.063560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.081292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.119916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.181618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.207597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.327877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:52.767200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:52.776215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.393794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.399777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.419394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.439458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44152","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f89684ba3974d676a7ff46109b4785b9ba18555ea18bcbbed271715c2ca6d641] <==
	2025/11/20 21:13:35 GCP Auth Webhook started!
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:14:00 Ready to marshal response ...
	2025/11/20 21:14:00 Ready to write response ...
	2025/11/20 21:14:06 Ready to marshal response ...
	2025/11/20 21:14:06 Ready to write response ...
	2025/11/20 21:14:12 Ready to marshal response ...
	2025/11/20 21:14:12 Ready to write response ...
	2025/11/20 21:14:12 Ready to marshal response ...
	2025/11/20 21:14:12 Ready to write response ...
	2025/11/20 21:14:19 Ready to marshal response ...
	2025/11/20 21:14:19 Ready to write response ...
	2025/11/20 21:14:32 Ready to marshal response ...
	2025/11/20 21:14:32 Ready to write response ...
	2025/11/20 21:14:41 Ready to marshal response ...
	2025/11/20 21:14:41 Ready to write response ...
	2025/11/20 21:17:02 Ready to marshal response ...
	2025/11/20 21:17:02 Ready to write response ...
	
	
	==> kernel <==
	 21:17:04 up  3:59,  0 user,  load average: 0.36, 1.56, 2.75
	Linux addons-828342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c] <==
	I1120 21:14:57.307235       1 main.go:301] handling current node
	I1120 21:15:07.307107       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:07.307140       1 main.go:301] handling current node
	I1120 21:15:17.305994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:17.306029       1 main.go:301] handling current node
	I1120 21:15:27.307106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:27.307218       1 main.go:301] handling current node
	I1120 21:15:37.311144       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:37.311178       1 main.go:301] handling current node
	I1120 21:15:47.314177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:47.314282       1 main.go:301] handling current node
	I1120 21:15:57.311510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:15:57.311546       1 main.go:301] handling current node
	I1120 21:16:07.311911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:07.311945       1 main.go:301] handling current node
	I1120 21:16:17.311617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:17.311731       1 main.go:301] handling current node
	I1120 21:16:27.307703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:27.307739       1 main.go:301] handling current node
	I1120 21:16:37.311165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:37.311200       1 main.go:301] handling current node
	I1120 21:16:47.305574       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:47.305678       1 main.go:301] handling current node
	I1120 21:16:57.307067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:16:57.307178       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa] <==
	E1120 21:13:05.505730       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.42.180:443: connect: connection refused" logger="UnhandledError"
	E1120 21:13:05.511250       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.42.180:443: connect: connection refused" logger="UnhandledError"
	W1120 21:13:06.506135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:06.506180       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 21:13:06.506194       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 21:13:06.506241       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:06.506273       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 21:13:06.507383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 21:13:10.525424       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1120 21:13:10.525869       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:10.525909       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 21:13:10.654477       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 21:13:10.685769       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1120 21:13:47.704333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51228: use of closed network connection
	E1120 21:13:47.944384       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51258: use of closed network connection
	I1120 21:14:16.207999       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1120 21:14:41.525970       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 21:14:41.961026       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.145.46"}
	I1120 21:17:02.759987       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.156.233"}
	
	
	==> kube-controller-manager [159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88] <==
	I1120 21:11:45.360744       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:11:45.360876       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:11:45.364119       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:11:45.364178       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:11:45.364386       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:11:45.364527       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:11:45.368464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:45.372662       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:45.372903       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:11:45.390966       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:11:45.391078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:45.404604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:45.404632       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:11:45.404660       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1120 21:11:50.959974       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1120 21:12:15.368900       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 21:12:15.372884       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 21:12:15.373062       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 21:12:15.373122       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 21:12:15.375135       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 21:12:15.474152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:12:15.476380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:12:30.310953       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1120 21:12:45.483849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 21:12:45.492300       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07] <==
	I1120 21:11:47.133676       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:47.224847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:47.325598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:47.325628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:11:47.325690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:47.355260       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:47.355314       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:47.365119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:47.365449       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:47.365466       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:47.368449       1 config.go:200] "Starting service config controller"
	I1120 21:11:47.368463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:47.368480       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:47.368484       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:47.368495       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:47.368499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:47.369165       1 config.go:309] "Starting node config controller"
	I1120 21:11:47.369173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:47.369179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:47.468619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:11:47.468653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:47.468684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3] <==
	E1120 21:11:38.349432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:38.349603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:11:38.349678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:38.349746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:11:38.349801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:11:38.349845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:38.349891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:38.359440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:38.362916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:11:38.363018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:38.363071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:38.363195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:38.363247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:11:38.363717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:11:39.214600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:39.332551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:11:39.371981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:11:39.488544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:39.488688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:11:39.508255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:39.566600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:39.575111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:11:39.596665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:39.612618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1120 21:11:42.728121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:15:20 addons-828342 kubelet[1277]: I1120 21:15:20.955198    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sh7sx" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:15:27 addons-828342 kubelet[1277]: I1120 21:15:27.954459    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k8tlb" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:14 addons-828342 kubelet[1277]: I1120 21:16:14.954466    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-5shs6" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:35 addons-828342 kubelet[1277]: I1120 21:16:35.953741    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sh7sx" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:38 addons-828342 kubelet[1277]: I1120 21:16:38.955280    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:40 addons-828342 kubelet[1277]: I1120 21:16:40.500561    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:40 addons-828342 kubelet[1277]: I1120 21:16:40.500624    1277 scope.go:117] "RemoveContainer" containerID="8071fe40f5833a8f20a65320aa098d1d98ac52596b5d61447443ae8e82f4d26a"
	Nov 20 21:16:41 addons-828342 kubelet[1277]: I1120 21:16:41.223416    1277 scope.go:117] "RemoveContainer" containerID="8071fe40f5833a8f20a65320aa098d1d98ac52596b5d61447443ae8e82f4d26a"
	Nov 20 21:16:41 addons-828342 kubelet[1277]: I1120 21:16:41.507186    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:41 addons-828342 kubelet[1277]: I1120 21:16:41.507730    1277 scope.go:117] "RemoveContainer" containerID="ea3702a650be11854f3a4f6a4dd40aa6fb8b16764f5ccc6ae9843b83fe8d0ee2"
	Nov 20 21:16:41 addons-828342 kubelet[1277]: E1120 21:16:41.507998    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6zgsm_kube-system(9b28e075-2521-408c-86c7-38c6b7b056b0)\"" pod="kube-system/registry-creds-764b6fb674-6zgsm" podUID="9b28e075-2521-408c-86c7-38c6b7b056b0"
	Nov 20 21:16:42 addons-828342 kubelet[1277]: I1120 21:16:42.510701    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:42 addons-828342 kubelet[1277]: I1120 21:16:42.510785    1277 scope.go:117] "RemoveContainer" containerID="ea3702a650be11854f3a4f6a4dd40aa6fb8b16764f5ccc6ae9843b83fe8d0ee2"
	Nov 20 21:16:42 addons-828342 kubelet[1277]: E1120 21:16:42.510937    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6zgsm_kube-system(9b28e075-2521-408c-86c7-38c6b7b056b0)\"" pod="kube-system/registry-creds-764b6fb674-6zgsm" podUID="9b28e075-2521-408c-86c7-38c6b7b056b0"
	Nov 20 21:16:51 addons-828342 kubelet[1277]: I1120 21:16:51.954135    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k8tlb" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:53 addons-828342 kubelet[1277]: I1120 21:16:53.954960    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:53 addons-828342 kubelet[1277]: I1120 21:16:53.955078    1277 scope.go:117] "RemoveContainer" containerID="ea3702a650be11854f3a4f6a4dd40aa6fb8b16764f5ccc6ae9843b83fe8d0ee2"
	Nov 20 21:16:54 addons-828342 kubelet[1277]: I1120 21:16:54.555520    1277 scope.go:117] "RemoveContainer" containerID="ea3702a650be11854f3a4f6a4dd40aa6fb8b16764f5ccc6ae9843b83fe8d0ee2"
	Nov 20 21:16:54 addons-828342 kubelet[1277]: I1120 21:16:54.555873    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6zgsm" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 21:16:54 addons-828342 kubelet[1277]: I1120 21:16:54.555926    1277 scope.go:117] "RemoveContainer" containerID="4c1c5ac17fded3430d125b33962a3033f14782811d9709f0b1352d677e65e14e"
	Nov 20 21:16:54 addons-828342 kubelet[1277]: E1120 21:16:54.556079    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6zgsm_kube-system(9b28e075-2521-408c-86c7-38c6b7b056b0)\"" pod="kube-system/registry-creds-764b6fb674-6zgsm" podUID="9b28e075-2521-408c-86c7-38c6b7b056b0"
	Nov 20 21:17:02 addons-828342 kubelet[1277]: I1120 21:17:02.573559    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvs8\" (UniqueName: \"kubernetes.io/projected/1ffd3f47-9f81-4d26-8043-13e307ceb54f-kube-api-access-hzvs8\") pod \"hello-world-app-5d498dc89-72ktx\" (UID: \"1ffd3f47-9f81-4d26-8043-13e307ceb54f\") " pod="default/hello-world-app-5d498dc89-72ktx"
	Nov 20 21:17:02 addons-828342 kubelet[1277]: I1120 21:17:02.574224    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1ffd3f47-9f81-4d26-8043-13e307ceb54f-gcp-creds\") pod \"hello-world-app-5d498dc89-72ktx\" (UID: \"1ffd3f47-9f81-4d26-8043-13e307ceb54f\") " pod="default/hello-world-app-5d498dc89-72ktx"
	Nov 20 21:17:02 addons-828342 kubelet[1277]: W1120 21:17:02.943348    1277 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/crio-2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99 WatchSource:0}: Error finding container 2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99: Status 404 returned error can't find the container with id 2f8ef929031042b430e93db349bea7e909561be25aa0290da8b3bafdeba44c99
	Nov 20 21:17:04 addons-828342 kubelet[1277]: I1120 21:17:04.642281    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-72ktx" podStartSLOduration=2.016651215 podStartE2EDuration="2.642249473s" podCreationTimestamp="2025-11-20 21:17:02 +0000 UTC" firstStartedPulling="2025-11-20 21:17:02.952158827 +0000 UTC m=+322.141811861" lastFinishedPulling="2025-11-20 21:17:03.577757093 +0000 UTC m=+322.767410119" observedRunningTime="2025-11-20 21:17:04.641749471 +0000 UTC m=+323.831403013" watchObservedRunningTime="2025-11-20 21:17:04.642249473 +0000 UTC m=+323.831902507"
	
	
	==> storage-provisioner [c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc] <==
	W1120 21:16:39.928621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:41.931721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:41.938497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:43.941109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:43.948197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:45.951009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:45.958612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:47.961434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:47.966112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:49.969662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:49.977925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:51.980710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:51.987790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:53.995511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:54.004839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:56.009162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:56.014236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:58.017820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:16:58.023012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:00.030270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:00.053270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:02.067747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:02.073192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:04.087237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:17:04.097119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-828342 -n addons-828342
helpers_test.go:269: (dbg) Run:  kubectl --context addons-828342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x: exit status 1 (93.141129ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jxltn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n279x" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x: exit status 1
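Note: the two ingress-nginx-admission-* pods flagged above are one-shot admission webhook Jobs that finish in the Succeeded phase, which is why the status.phase!=Running field selector still lists them. The follow-up describe most likely returns NotFound because it is run without a namespace, so kubectl looks in default rather than ingress-nginx. A hypothetical manual re-check with the namespace made explicit (assuming the addons-828342 context is still reachable):

	kubectl --context addons-828342 -n ingress-nginx get jobs,pods
	kubectl --context addons-828342 -n ingress-nginx describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x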
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (319.43766ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:17:05.963159  847277 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:17:05.963968  847277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:17:05.964026  847277 out.go:374] Setting ErrFile to fd 2...
	I1120 21:17:05.964048  847277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:17:05.964413  847277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:17:05.964863  847277 mustload.go:66] Loading cluster: addons-828342
	I1120 21:17:05.965372  847277 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:17:05.965415  847277 addons.go:607] checking whether the cluster is paused
	I1120 21:17:05.965607  847277 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:17:05.965640  847277 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:17:05.966195  847277 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:17:05.999843  847277 ssh_runner.go:195] Run: systemctl --version
	I1120 21:17:05.999910  847277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:17:06.023615  847277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:17:06.142485  847277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:17:06.142590  847277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:17:06.178148  847277 cri.go:89] found id: "4c1c5ac17fded3430d125b33962a3033f14782811d9709f0b1352d677e65e14e"
	I1120 21:17:06.178174  847277 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:17:06.178179  847277 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:17:06.178184  847277 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:17:06.178188  847277 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:17:06.178203  847277 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:17:06.178207  847277 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:17:06.178211  847277 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:17:06.178215  847277 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:17:06.178225  847277 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:17:06.178229  847277 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:17:06.178233  847277 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:17:06.178240  847277 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:17:06.178248  847277 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:17:06.178251  847277 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:17:06.178257  847277 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:17:06.178263  847277 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:17:06.178268  847277 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:17:06.178272  847277 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:17:06.178275  847277 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:17:06.178279  847277 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:17:06.178283  847277 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:17:06.178286  847277 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:17:06.178289  847277 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:17:06.178293  847277 cri.go:89] found id: ""
	I1120 21:17:06.178347  847277 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:17:06.193050  847277 out.go:203] 
	W1120 21:17:06.194294  847277 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:17:06.194329  847277 out.go:285] * 
	* 
	W1120 21:17:06.202674  847277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:17:06.203919  847277 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
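Note on the exit status 11 above: before disabling an addon, minikube first checks whether the cluster is paused (addons.go:607 in the trace) by listing kube-system containers with crictl and then running sudo runc list -f json over SSH. On this crio node the runc call fails with "open /run/runc: no such file or directory", so the paused check errors out and the command aborts with MK_ADDON_DISABLE_PAUSED even though crictl does list the containers. A minimal, hypothetical way to reproduce the failing check by hand, using the same commands shown in the trace (assumes the addons-828342 profile is still running):

	minikube -p addons-828342 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-828342 ssh -- sudo runc list -f json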
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable ingress --alsologtostderr -v=1: exit status 11 (263.393256ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:17:06.262478  847389 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:17:06.263299  847389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:17:06.263345  847389 out.go:374] Setting ErrFile to fd 2...
	I1120 21:17:06.263366  847389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:17:06.263678  847389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:17:06.264029  847389 mustload.go:66] Loading cluster: addons-828342
	I1120 21:17:06.264559  847389 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:17:06.264602  847389 addons.go:607] checking whether the cluster is paused
	I1120 21:17:06.264751  847389 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:17:06.264784  847389 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:17:06.265397  847389 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:17:06.283967  847389 ssh_runner.go:195] Run: systemctl --version
	I1120 21:17:06.284024  847389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:17:06.302242  847389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:17:06.405956  847389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:17:06.406053  847389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:17:06.443823  847389 cri.go:89] found id: "4c1c5ac17fded3430d125b33962a3033f14782811d9709f0b1352d677e65e14e"
	I1120 21:17:06.443855  847389 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:17:06.443861  847389 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:17:06.443865  847389 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:17:06.443868  847389 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:17:06.443871  847389 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:17:06.443874  847389 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:17:06.443877  847389 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:17:06.443880  847389 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:17:06.443886  847389 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:17:06.443915  847389 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:17:06.443919  847389 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:17:06.443922  847389 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:17:06.443925  847389 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:17:06.443928  847389 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:17:06.443941  847389 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:17:06.443950  847389 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:17:06.443956  847389 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:17:06.443959  847389 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:17:06.443962  847389 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:17:06.443968  847389 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:17:06.443985  847389 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:17:06.443993  847389 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:17:06.443997  847389 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:17:06.444000  847389 cri.go:89] found id: ""
	I1120 21:17:06.444065  847389 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:17:06.458858  847389 out.go:203] 
	W1120 21:17:06.460132  847389 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:17:06.460167  847389 out.go:285] * 
	* 
	W1120 21:17:06.468324  847389 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:17:06.469479  847389 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.41s)

x
+
TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rkcm9" [9a044ae6-5efd-4c82-8b67-537d33ef89d7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003779281s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (266.517049ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:35.884189  845421 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:35.884865  845421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:35.884906  845421 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:35.884927  845421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:35.885224  845421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:35.885559  845421 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:35.885968  845421 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:35.886010  845421 addons.go:607] checking whether the cluster is paused
	I1120 21:14:35.886146  845421 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:35.886182  845421 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:35.886706  845421 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:35.904692  845421 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:35.904747  845421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:35.927170  845421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:36.030354  845421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:36.030505  845421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:36.061644  845421 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:36.061668  845421 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:36.061683  845421 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:36.061687  845421 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:36.061691  845421 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:36.061695  845421 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:36.061698  845421 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:36.061702  845421 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:36.061705  845421 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:36.061710  845421 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:36.061718  845421 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:36.061725  845421 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:36.061729  845421 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:36.061732  845421 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:36.061735  845421 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:36.061740  845421 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:36.061745  845421 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:36.061749  845421 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:36.061753  845421 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:36.061756  845421 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:36.061760  845421 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:36.061763  845421 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:36.061766  845421 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:36.061769  845421 cri.go:89] found id: ""
	I1120 21:14:36.061821  845421 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:36.077146  845421 out.go:203] 
	W1120 21:14:36.079958  845421 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:36.079994  845421 out.go:285] * 
	* 
	W1120 21:14:36.088045  845421 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:36.090852  845421 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

x
+
TestAddons/parallel/MetricsServer (5.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.514365ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011756323s
addons_test.go:463: (dbg) Run:  kubectl --context addons-828342 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (341.886038ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:41.327983  845638 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:41.328753  845638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:41.328768  845638 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:41.328773  845638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:41.329074  845638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:41.329408  845638 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:41.329816  845638 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:41.329834  845638 addons.go:607] checking whether the cluster is paused
	I1120 21:14:41.329984  845638 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:41.329999  845638 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:41.330490  845638 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:41.348661  845638 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:41.348718  845638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:41.373624  845638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:41.487393  845638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:41.487495  845638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:41.546312  845638 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:41.546332  845638 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:41.546337  845638 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:41.546341  845638 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:41.546344  845638 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:41.546350  845638 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:41.546359  845638 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:41.546363  845638 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:41.546366  845638 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:41.546373  845638 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:41.546376  845638 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:41.546379  845638 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:41.546382  845638 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:41.546385  845638 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:41.546388  845638 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:41.546393  845638 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:41.546396  845638 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:41.546399  845638 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:41.546402  845638 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:41.546405  845638 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:41.546410  845638 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:41.546413  845638 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:41.546415  845638 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:41.546418  845638 cri.go:89] found id: ""
	I1120 21:14:41.546476  845638 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:41.567517  845638 out.go:203] 
	W1120 21:14:41.571890  845638 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:41.571932  845638 out.go:285] * 
	* 
	W1120 21:14:41.590417  845638 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:41.598222  845638 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.52s)

x
+
TestAddons/parallel/CSI (46.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1120 21:13:54.608244  836852 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1120 21:13:54.612602  836852 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1120 21:13:54.612631  836852 kapi.go:107] duration metric: took 4.394212ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.40615ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-828342 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-828342 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ef648b07-79e1-4556-b0f8-cbfe5281ed1d] Pending
helpers_test.go:352: "task-pv-pod" [ef648b07-79e1-4556-b0f8-cbfe5281ed1d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ef648b07-79e1-4556-b0f8-cbfe5281ed1d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003202138s
addons_test.go:572: (dbg) Run:  kubectl --context addons-828342 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-828342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-828342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-828342 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-828342 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-828342 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-828342 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5bb48df0-5257-4e4d-b5dc-4a9e73fa0764] Pending
helpers_test.go:352: "task-pv-pod-restore" [5bb48df0-5257-4e4d-b5dc-4a9e73fa0764] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5bb48df0-5257-4e4d-b5dc-4a9e73fa0764] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00825847s
addons_test.go:614: (dbg) Run:  kubectl --context addons-828342 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-828342 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-828342 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (280.122968ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:40.556011  845529 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:40.556918  845529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:40.556985  845529 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:40.557013  845529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:40.557440  845529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:40.557812  845529 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:40.558261  845529 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:40.558349  845529 addons.go:607] checking whether the cluster is paused
	I1120 21:14:40.558507  845529 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:40.558542  845529 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:40.559320  845529 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:40.578815  845529 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:40.578875  845529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:40.596749  845529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:40.701755  845529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:40.701846  845529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:40.738009  845529 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:40.738032  845529 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:40.738037  845529 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:40.738040  845529 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:40.738044  845529 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:40.738048  845529 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:40.738051  845529 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:40.738054  845529 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:40.738058  845529 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:40.738066  845529 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:40.738070  845529 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:40.738073  845529 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:40.738076  845529 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:40.738081  845529 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:40.738093  845529 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:40.738102  845529 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:40.738106  845529 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:40.738111  845529 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:40.738114  845529 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:40.738117  845529 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:40.738121  845529 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:40.738127  845529 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:40.738130  845529 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:40.738133  845529 cri.go:89] found id: ""
	I1120 21:14:40.738183  845529 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:40.753399  845529 out.go:203] 
	W1120 21:14:40.756383  845529 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:40.756414  845529 out.go:285] * 
	* 
	W1120 21:14:40.765529  845529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:40.768468  845529 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (292.278793ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:40.823768  845573 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:40.824593  845573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:40.824633  845573 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:40.824659  845573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:40.824987  845573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:40.825339  845573 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:40.825770  845573 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:40.825817  845573 addons.go:607] checking whether the cluster is paused
	I1120 21:14:40.825953  845573 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:40.825991  845573 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:40.826550  845573 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:40.846272  845573 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:40.846324  845573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:40.865128  845573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:40.978324  845573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:40.978419  845573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:41.021148  845573 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:41.021220  845573 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:41.021239  845573 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:41.021261  845573 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:41.021281  845573 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:41.021316  845573 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:41.021335  845573 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:41.021355  845573 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:41.021375  845573 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:41.021412  845573 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:41.021431  845573 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:41.021451  845573 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:41.021470  845573 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:41.021505  845573 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:41.021525  845573 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:41.021547  845573 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:41.021588  845573 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:41.021609  845573 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:41.021630  845573 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:41.021650  845573 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:41.021692  845573 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:41.021711  845573 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:41.021731  845573 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:41.021757  845573 cri.go:89] found id: ""
	I1120 21:14:41.021837  845573 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:41.044731  845573 out.go:203] 
	W1120 21:14:41.048304  845573 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:41.048399  845573 out.go:285] * 
	* 
	W1120 21:14:41.058601  845573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:41.061977  845573 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.46s)

x
+
TestAddons/parallel/Headlamp (3.27s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-828342 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-828342 --alsologtostderr -v=1: exit status 11 (253.251311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:26.608432  844799 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:26.609234  844799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.609253  844799 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:26.609258  844799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.609547  844799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:26.609947  844799 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:26.610361  844799 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:26.610382  844799 addons.go:607] checking whether the cluster is paused
	I1120 21:14:26.610486  844799 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:26.610502  844799 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:26.610961  844799 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:26.629143  844799 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:26.629264  844799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:26.647374  844799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:26.745626  844799 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:26.745706  844799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:26.774627  844799 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:26.774647  844799 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:26.774652  844799 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:26.774656  844799 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:26.774659  844799 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:26.774663  844799 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:26.774666  844799 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:26.774670  844799 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:26.774673  844799 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:26.774679  844799 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:26.774683  844799 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:26.774686  844799 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:26.774689  844799 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:26.774692  844799 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:26.774695  844799 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:26.774699  844799 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:26.774703  844799 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:26.774706  844799 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:26.774709  844799 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:26.774712  844799 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:26.774716  844799 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:26.774720  844799 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:26.774723  844799 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:26.774726  844799 cri.go:89] found id: ""
	I1120 21:14:26.774777  844799 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:26.790345  844799 out.go:203] 
	W1120 21:14:26.793243  844799 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:26.793270  844799 out.go:285] * 
	* 
	W1120 21:14:26.801211  844799 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:26.804191  844799 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-828342 --alsologtostderr -v=1": exit status 11
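The exit status 11 recorded above comes from minikube's paused-container check rather than from the headlamp addon itself: the addon enable path lists containers with `sudo runc list -f json` inside the node, and on this crio profile that command fails with "open /run/runc: no such file or directory". As a minimal sketch, assuming the addons-828342 profile is still running, the failing probe can be reproduced by hand by passing the same command through minikube ssh:

	out/minikube-linux-arm64 -p addons-828342 ssh -- sudo runc list -f json

which would be expected to surface the same /run/runc error seen in the stderr block above.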
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-828342
helpers_test.go:243: (dbg) docker inspect addons-828342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f",
	        "Created": "2025-11-20T21:11:16.147726375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 838012,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:11:16.207148163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/hostname",
	        "HostsPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/hosts",
	        "LogPath": "/var/lib/docker/containers/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f/457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f-json.log",
	        "Name": "/addons-828342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-828342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-828342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "457f849792bf1a170b641dbc5e91c7bad77a37a9c196656653764b59d471350f",
	                "LowerDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9053ca37a57a4f0c5e44cc17d517c8f65999e580d22fddc3f525ff3c20a90aad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-828342",
	                "Source": "/var/lib/docker/volumes/addons-828342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-828342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-828342",
	                "name.minikube.sigs.k8s.io": "addons-828342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "180619f4b7334bfb68e04222525e77dcf9ddaa6ac5dc79f2e8b408d065282995",
	            "SandboxKey": "/var/run/docker/netns/180619f4b733",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-828342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4e:db:48:e9:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69692cf3204c643c9d16d84d2f480a3beb892e409b320e951e971b06bb156b0",
	                    "EndpointID": "9572861d8caf39d0a4902b1cffc6bbabe57b872a7e7008061b7c731d065ec257",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-828342",
	                        "457f849792bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
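For reference, the inspect output above shows how the node's ports are published on loopback: the container's 22/tcp (SSH) is mapped to 127.0.0.1:33877 and 8443/tcp (the Kubernetes API server) to 127.0.0.1:33880. A single mapping can be read directly with the same Go template the provisioner uses later in this log; a minimal sketch, assuming the container is still running:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-828342

which prints just the host port (33877 here), the value libmachine dials for SSH during provisioning.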
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-828342 -n addons-828342
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-828342 logs -n 25: (1.593609775s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-775498 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-775498   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p download-only-775498                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-775498   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -o=json --download-only -p download-only-395142 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-395142   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p download-only-395142                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-395142   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p download-only-775498                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-775498   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p download-only-395142                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-395142   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ --download-only -p download-docker-294137 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-294137 │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ delete  │ -p download-docker-294137                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-294137 │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ --download-only -p binary-mirror-490692 --alsologtostderr --binary-mirror http://127.0.0.1:37155 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-490692   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ delete  │ -p binary-mirror-490692                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-490692   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ addons  │ disable dashboard -p addons-828342                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ addons  │ enable dashboard -p addons-828342                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	│ start   │ -p addons-828342 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:13 UTC │
	│ addons  │ addons-828342 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ addons  │ addons-828342 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ addons  │ addons-828342 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │                     │
	│ ip      │ addons-828342 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ addons-828342 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ addons-828342 ssh cat /opt/local-path-provisioner/pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ addons-828342 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ addons-828342 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ addons  │ enable headlamp -p addons-828342 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-828342          │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:10:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:10:50.054958  837622 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:50.055241  837622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:50.055250  837622 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:50.055255  837622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:50.055628  837622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:10:50.056187  837622 out.go:368] Setting JSON to false
	I1120 21:10:50.057117  837622 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13975,"bootTime":1763659075,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:10:50.057218  837622 start.go:143] virtualization:  
	I1120 21:10:50.060740  837622 out.go:179] * [addons-828342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:10:50.064443  837622 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:10:50.064592  837622 notify.go:221] Checking for updates...
	I1120 21:10:50.070409  837622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:10:50.073470  837622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:10:50.076316  837622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:10:50.079258  837622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:10:50.082099  837622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:10:50.085178  837622 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:10:50.113578  837622 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:10:50.113706  837622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:50.180368  837622 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:50.170691246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:50.180484  837622 docker.go:319] overlay module found
	I1120 21:10:50.183511  837622 out.go:179] * Using the docker driver based on user configuration
	I1120 21:10:50.186261  837622 start.go:309] selected driver: docker
	I1120 21:10:50.186285  837622 start.go:930] validating driver "docker" against <nil>
	I1120 21:10:50.186300  837622 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:10:50.187053  837622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:50.249914  837622 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:50.240979965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:50.250073  837622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:10:50.250322  837622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:10:50.253259  837622 out.go:179] * Using Docker driver with root privileges
	I1120 21:10:50.256034  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:10:50.256098  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:10:50.256113  837622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:10:50.256202  837622 start.go:353] cluster config:
	{Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1120 21:10:50.259298  837622 out.go:179] * Starting "addons-828342" primary control-plane node in "addons-828342" cluster
	I1120 21:10:50.262136  837622 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:10:50.265133  837622 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:10:50.267986  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:10:50.268035  837622 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:10:50.268048  837622 cache.go:65] Caching tarball of preloaded images
	I1120 21:10:50.268062  837622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:10:50.268132  837622 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:10:50.268142  837622 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:10:50.268475  837622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json ...
	I1120 21:10:50.268499  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json: {Name:mk3184c7dba130c932bc9e5294a677adb27e05fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:50.283287  837622 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 21:10:50.283396  837622 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 21:10:50.283421  837622 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 21:10:50.283433  837622 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 21:10:50.283441  837622 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 21:10:50.283447  837622 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1120 21:11:08.260687  837622 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1120 21:11:08.260733  837622 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:11:08.260763  837622 start.go:360] acquireMachinesLock for addons-828342: {Name:mk557b86f17357107ee0584eb0543209b8fb35ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:11:08.261617  837622 start.go:364] duration metric: took 825.93µs to acquireMachinesLock for "addons-828342"
	I1120 21:11:08.261664  837622 start.go:93] Provisioning new machine with config: &{Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:11:08.261753  837622 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:11:08.265299  837622 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1120 21:11:08.265547  837622 start.go:159] libmachine.API.Create for "addons-828342" (driver="docker")
	I1120 21:11:08.265587  837622 client.go:173] LocalClient.Create starting
	I1120 21:11:08.265726  837622 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 21:11:08.824195  837622 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 21:11:09.355611  837622 cli_runner.go:164] Run: docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:11:09.370473  837622 cli_runner.go:211] docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:11:09.370557  837622 network_create.go:284] running [docker network inspect addons-828342] to gather additional debugging logs...
	I1120 21:11:09.370574  837622 cli_runner.go:164] Run: docker network inspect addons-828342
	W1120 21:11:09.386772  837622 cli_runner.go:211] docker network inspect addons-828342 returned with exit code 1
	I1120 21:11:09.386799  837622 network_create.go:287] error running [docker network inspect addons-828342]: docker network inspect addons-828342: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-828342 not found
	I1120 21:11:09.386813  837622 network_create.go:289] output of [docker network inspect addons-828342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-828342 not found
	
	** /stderr **
	I1120 21:11:09.386921  837622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:11:09.403697  837622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bfc5f0}
	I1120 21:11:09.403749  837622 network_create.go:124] attempt to create docker network addons-828342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1120 21:11:09.403823  837622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-828342 addons-828342
	I1120 21:11:09.462507  837622 network_create.go:108] docker network addons-828342 192.168.49.0/24 created
	I1120 21:11:09.462536  837622 kic.go:121] calculated static IP "192.168.49.2" for the "addons-828342" container
	I1120 21:11:09.462610  837622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:11:09.477741  837622 cli_runner.go:164] Run: docker volume create addons-828342 --label name.minikube.sigs.k8s.io=addons-828342 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:11:09.495502  837622 oci.go:103] Successfully created a docker volume addons-828342
	I1120 21:11:09.495610  837622 cli_runner.go:164] Run: docker run --rm --name addons-828342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --entrypoint /usr/bin/test -v addons-828342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:11:11.675805  837622 cli_runner.go:217] Completed: docker run --rm --name addons-828342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --entrypoint /usr/bin/test -v addons-828342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (2.180154992s)
	I1120 21:11:11.675835  837622 oci.go:107] Successfully prepared a docker volume addons-828342
	I1120 21:11:11.675899  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:11:11.675909  837622 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:11:11.675970  837622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-828342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:11:16.078010  837622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-828342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.401989429s)
	I1120 21:11:16.078047  837622 kic.go:203] duration metric: took 4.402133751s to extract preloaded images to volume ...
	W1120 21:11:16.078192  837622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:11:16.078308  837622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:11:16.132622  837622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-828342 --name addons-828342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-828342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-828342 --network addons-828342 --ip 192.168.49.2 --volume addons-828342:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:11:16.408930  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Running}}
	I1120 21:11:16.433430  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:16.454484  837622 cli_runner.go:164] Run: docker exec addons-828342 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:11:16.527681  837622 oci.go:144] the created container "addons-828342" has a running status.
	I1120 21:11:16.527710  837622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa...
	I1120 21:11:16.862588  837622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:11:16.895422  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:16.924998  837622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:11:16.925017  837622 kic_runner.go:114] Args: [docker exec --privileged addons-828342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:11:16.982497  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:17.008534  837622 machine.go:94] provisionDockerMachine start ...
	I1120 21:11:17.008648  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:17.037831  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:17.038164  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:17.038173  837622 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:11:17.038965  837622 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:11:20.183042  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-828342
	
	I1120 21:11:20.183075  837622 ubuntu.go:182] provisioning hostname "addons-828342"
	I1120 21:11:20.183146  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.202587  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:20.202911  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:20.202928  837622 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-828342 && echo "addons-828342" | sudo tee /etc/hostname
	I1120 21:11:20.353146  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-828342
	
	I1120 21:11:20.353247  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.371232  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:20.371552  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:20.371573  837622 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-828342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-828342/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-828342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:11:20.515286  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:11:20.515312  837622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:11:20.515330  837622 ubuntu.go:190] setting up certificates
	I1120 21:11:20.515339  837622 provision.go:84] configureAuth start
	I1120 21:11:20.515399  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:20.532775  837622 provision.go:143] copyHostCerts
	I1120 21:11:20.532885  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:11:20.533018  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:11:20.533081  837622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:11:20.533135  837622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.addons-828342 san=[127.0.0.1 192.168.49.2 addons-828342 localhost minikube]
	I1120 21:11:20.943141  837622 provision.go:177] copyRemoteCerts
	I1120 21:11:20.943211  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:11:20.943256  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:20.960125  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.063023  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:11:21.083949  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:11:21.101446  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:11:21.119171  837622 provision.go:87] duration metric: took 603.796423ms to configureAuth
	I1120 21:11:21.119196  837622 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:11:21.119414  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:21.119525  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.136394  837622 main.go:143] libmachine: Using SSH client type: native
	I1120 21:11:21.136716  837622 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1120 21:11:21.136737  837622 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:11:21.423233  837622 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:11:21.423302  837622 machine.go:97] duration metric: took 4.41474688s to provisionDockerMachine
	I1120 21:11:21.423326  837622 client.go:176] duration metric: took 13.157728421s to LocalClient.Create
	I1120 21:11:21.423379  837622 start.go:167] duration metric: took 13.157833128s to libmachine.API.Create "addons-828342"
	I1120 21:11:21.423406  837622 start.go:293] postStartSetup for "addons-828342" (driver="docker")
	I1120 21:11:21.423435  837622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:11:21.423538  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:11:21.423667  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.441324  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.547226  837622 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:11:21.550585  837622 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:11:21.550618  837622 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:11:21.550630  837622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:11:21.550700  837622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:11:21.550727  837622 start.go:296] duration metric: took 127.297465ms for postStartSetup
	I1120 21:11:21.551074  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:21.567974  837622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/config.json ...
	I1120 21:11:21.568289  837622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:11:21.568345  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.584899  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.688088  837622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:11:21.692877  837622 start.go:128] duration metric: took 13.431107515s to createHost
	I1120 21:11:21.692902  837622 start.go:83] releasing machines lock for "addons-828342", held for 13.431262027s
	I1120 21:11:21.692983  837622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-828342
	I1120 21:11:21.709594  837622 ssh_runner.go:195] Run: cat /version.json
	I1120 21:11:21.709654  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.709913  837622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:11:21.709973  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:21.733623  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.744453  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:21.830567  837622 ssh_runner.go:195] Run: systemctl --version
	I1120 21:11:21.923904  837622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:11:21.961071  837622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:11:21.965728  837622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:11:21.965825  837622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:11:21.995148  837622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:11:21.995230  837622 start.go:496] detecting cgroup driver to use...
	I1120 21:11:21.995271  837622 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:11:21.995338  837622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:11:22.013503  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:11:22.026950  837622 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:11:22.027042  837622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:11:22.046235  837622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:11:22.066313  837622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:11:22.198451  837622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:11:22.343165  837622 docker.go:234] disabling docker service ...
	I1120 21:11:22.343238  837622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:11:22.373117  837622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:11:22.386952  837622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:11:22.516711  837622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:11:22.646385  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:11:22.659619  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:11:22.675169  837622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:11:22.675266  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.684678  837622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:11:22.684756  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.694494  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.703990  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.713067  837622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:11:22.721623  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.730753  837622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.745300  837622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:11:22.754212  837622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:11:22.761767  837622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:11:22.769142  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:22.890868  837622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:11:23.082409  837622 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:11:23.082551  837622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:11:23.086525  837622 start.go:564] Will wait 60s for crictl version
	I1120 21:11:23.086593  837622 ssh_runner.go:195] Run: which crictl
	I1120 21:11:23.090051  837622 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:11:23.114234  837622 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:11:23.114344  837622 ssh_runner.go:195] Run: crio --version
	I1120 21:11:23.142228  837622 ssh_runner.go:195] Run: crio --version
	I1120 21:11:23.176513  837622 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:11:23.179225  837622 cli_runner.go:164] Run: docker network inspect addons-828342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:11:23.194361  837622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:11:23.198186  837622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:11:23.207660  837622 kubeadm.go:884] updating cluster {Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:11:23.207772  837622 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:11:23.207829  837622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:11:23.239884  837622 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:11:23.239910  837622 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:11:23.239968  837622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:11:23.269224  837622 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:11:23.269250  837622 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:11:23.269258  837622 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:11:23.269358  837622 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-828342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:11:23.269443  837622 ssh_runner.go:195] Run: crio config
	I1120 21:11:23.333177  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:11:23.333219  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:11:23.333243  837622 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:11:23.333277  837622 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-828342 NodeName:addons-828342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:11:23.333408  837622 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-828342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:11:23.333494  837622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:11:23.341470  837622 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:11:23.341568  837622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:11:23.349509  837622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:11:23.362477  837622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:11:23.377923  837622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1120 21:11:23.396608  837622 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:11:23.402531  837622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:11:23.412472  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:23.529843  837622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:23.550029  837622 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342 for IP: 192.168.49.2
	I1120 21:11:23.550054  837622 certs.go:195] generating shared ca certs ...
	I1120 21:11:23.550072  837622 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:23.550215  837622 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:11:24.108256  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt ...
	I1120 21:11:24.108289  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt: {Name:mk99b4138ffbdd521ade86fe93e2ecb16a119bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.109124  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key ...
	I1120 21:11:24.109142  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key: {Name:mk9989a3516add42f4cc91a43b4f457a4ffe45b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.109804  837622 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:11:24.659335  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt ...
	I1120 21:11:24.659369  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt: {Name:mk17edb29508da4a28dfe448254668558046171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.659556  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key ...
	I1120 21:11:24.659570  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key: {Name:mk8712b76bb21a33d7d0a56aadaf09a5974dd74e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:24.659653  837622 certs.go:257] generating profile certs ...
	I1120 21:11:24.659723  837622 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key
	I1120 21:11:24.659743  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt with IP's: []
	I1120 21:11:25.231610  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt ...
	I1120 21:11:25.231644  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: {Name:mk6f16491fb88000ee2dc18919f6827195283bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.232502  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key ...
	I1120 21:11:25.232528  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.key: {Name:mka1352f5a0708566dd0785034fd37ac540dd680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.232693  837622 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139
	I1120 21:11:25.232733  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1120 21:11:25.624070  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 ...
	I1120 21:11:25.624103  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139: {Name:mkea90d949b0f2fd6ce61d7102d8bda7038f4e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.624336  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139 ...
	I1120 21:11:25.624356  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139: {Name:mk1ae3bfad5daff18aeebf26340ca9af94a3bb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:25.624440  837622 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt.83d65139 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt
	I1120 21:11:25.624530  837622 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key.83d65139 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key
	I1120 21:11:25.624587  837622 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key
	I1120 21:11:25.624607  837622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt with IP's: []
	I1120 21:11:26.154993  837622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt ...
	I1120 21:11:26.155025  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt: {Name:mke6860b26526f96f3ed5f02e152067209959fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.155220  837622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key ...
	I1120 21:11:26.155235  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key: {Name:mkb1be3640fc1c7d774719dd0e365c182c9c4b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.156088  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:11:26.156137  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:11:26.156162  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:11:26.156190  837622 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:11:26.156751  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:11:26.176749  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:11:26.194526  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:11:26.212154  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:11:26.230353  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:11:26.248408  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:11:26.265975  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:11:26.283630  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:11:26.301383  837622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:11:26.318726  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:11:26.333197  837622 ssh_runner.go:195] Run: openssl version
	I1120 21:11:26.339572  837622 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.347109  837622 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:11:26.354724  837622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.358500  837622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.358566  837622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:11:26.400044  837622 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:11:26.407484  837622 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:11:26.414900  837622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:11:26.418549  837622 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:11:26.418600  837622 kubeadm.go:401] StartCluster: {Name:addons-828342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-828342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:11:26.418670  837622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:11:26.418749  837622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:11:26.450638  837622 cri.go:89] found id: ""
	I1120 21:11:26.450709  837622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:11:26.458709  837622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:11:26.466609  837622 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:11:26.466678  837622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:11:26.474613  837622 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:11:26.474635  837622 kubeadm.go:158] found existing configuration files:
	
	I1120 21:11:26.474707  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:11:26.482763  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:11:26.482836  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:11:26.490396  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:11:26.498477  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:11:26.498564  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:11:26.506167  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:11:26.514140  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:11:26.514265  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:11:26.522115  837622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:11:26.529870  837622 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:11:26.529934  837622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:11:26.537386  837622 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:11:26.581585  837622 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:11:26.581733  837622 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:11:26.602561  837622 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:11:26.602652  837622 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:11:26.602699  837622 kubeadm.go:319] OS: Linux
	I1120 21:11:26.602747  837622 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:11:26.602798  837622 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:11:26.602848  837622 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:11:26.602899  837622 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:11:26.602949  837622 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:11:26.603019  837622 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:11:26.603070  837622 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:11:26.603127  837622 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:11:26.603179  837622 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:11:26.676866  837622 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:11:26.677043  837622 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:11:26.677170  837622 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:11:26.691901  837622 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:11:26.697987  837622 out.go:252]   - Generating certificates and keys ...
	I1120 21:11:26.698173  837622 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:11:26.698301  837622 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:11:26.862024  837622 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:11:27.357948  837622 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:11:28.306939  837622 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:11:28.878469  837622 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:11:29.130588  837622 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:11:29.130742  837622 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-828342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 21:11:29.550693  837622 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:11:29.551042  837622 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-828342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 21:11:30.129271  837622 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:11:30.812324  837622 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:11:31.045202  837622 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:11:31.045498  837622 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:11:31.256100  837622 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:11:31.510179  837622 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:11:31.722489  837622 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:11:32.555753  837622 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:11:33.455597  837622 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:11:33.456613  837622 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:11:33.459666  837622 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:11:33.463285  837622 out.go:252]   - Booting up control plane ...
	I1120 21:11:33.463389  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:11:33.463471  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:11:33.464907  837622 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:11:33.484594  837622 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:11:33.484707  837622 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:11:33.493519  837622 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:11:33.494939  837622 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:11:33.495247  837622 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:11:33.635424  837622 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:11:33.635549  837622 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:34.143472  837622 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 508.694527ms
	I1120 21:11:34.146894  837622 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:34.147383  837622 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1120 21:11:34.148222  837622 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:34.148561  837622 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:37.582117  837622 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.433143937s
	I1120 21:11:38.351938  837622 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.202423482s
	I1120 21:11:40.149490  837622 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001605498s
	I1120 21:11:40.169549  837622 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:40.183206  837622 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:40.198626  837622 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:40.198840  837622 kubeadm.go:319] [mark-control-plane] Marking the node addons-828342 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:40.213111  837622 kubeadm.go:319] [bootstrap-token] Using token: kdkmn8.0zmhcrclk06dr83a
	I1120 21:11:40.216167  837622 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:40.216305  837622 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:40.226084  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:40.235160  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:40.239573  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:40.244002  837622 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:40.248395  837622 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:40.556721  837622 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:41.016074  837622 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:41.558208  837622 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:41.559514  837622 kubeadm.go:319] 
	I1120 21:11:41.559589  837622 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:41.559611  837622 kubeadm.go:319] 
	I1120 21:11:41.559723  837622 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:41.559742  837622 kubeadm.go:319] 
	I1120 21:11:41.559774  837622 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:41.559862  837622 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:41.559952  837622 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:41.559958  837622 kubeadm.go:319] 
	I1120 21:11:41.560022  837622 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:41.560027  837622 kubeadm.go:319] 
	I1120 21:11:41.560094  837622 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:41.560100  837622 kubeadm.go:319] 
	I1120 21:11:41.560163  837622 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:41.560244  837622 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:41.560315  837622 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:41.560320  837622 kubeadm.go:319] 
	I1120 21:11:41.560427  837622 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:41.560541  837622 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:41.560563  837622 kubeadm.go:319] 
	I1120 21:11:41.560669  837622 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kdkmn8.0zmhcrclk06dr83a \
	I1120 21:11:41.560806  837622 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 21:11:41.560889  837622 kubeadm.go:319] 	--control-plane 
	I1120 21:11:41.560899  837622 kubeadm.go:319] 
	I1120 21:11:41.561087  837622 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:41.561096  837622 kubeadm.go:319] 
	I1120 21:11:41.561191  837622 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kdkmn8.0zmhcrclk06dr83a \
	I1120 21:11:41.561308  837622 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 21:11:41.565203  837622 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:41.565464  837622 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:41.565592  837622 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:41.565615  837622 cni.go:84] Creating CNI manager for ""
	I1120 21:11:41.565631  837622 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:11:41.568897  837622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:41.571851  837622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:41.576161  837622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:41.576185  837622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:41.589463  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:41.888208  837622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:41.888353  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:41.888434  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-828342 minikube.k8s.io/updated_at=2025_11_20T21_11_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-828342 minikube.k8s.io/primary=true
	I1120 21:11:42.038367  837622 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:42.038521  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:42.538895  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:43.039534  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:43.539110  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:44.039237  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:44.538657  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.038876  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.538941  837622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:45.624646  837622 kubeadm.go:1114] duration metric: took 3.736358353s to wait for elevateKubeSystemPrivileges
	I1120 21:11:45.624692  837622 kubeadm.go:403] duration metric: took 19.206086449s to StartCluster
	I1120 21:11:45.624710  837622 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:45.625450  837622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:11:45.625871  837622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:45.626086  837622 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:11:45.626223  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:45.626500  837622 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 21:11:45.626595  837622 addons.go:70] Setting yakd=true in profile "addons-828342"
	I1120 21:11:45.626616  837622 addons.go:239] Setting addon yakd=true in "addons-828342"
	I1120 21:11:45.626639  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.627123  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.627373  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:45.627422  837622 addons.go:70] Setting inspektor-gadget=true in profile "addons-828342"
	I1120 21:11:45.627433  837622 addons.go:239] Setting addon inspektor-gadget=true in "addons-828342"
	I1120 21:11:45.627453  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.627834  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.628065  837622 addons.go:70] Setting metrics-server=true in profile "addons-828342"
	I1120 21:11:45.628086  837622 addons.go:239] Setting addon metrics-server=true in "addons-828342"
	I1120 21:11:45.628110  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.628525  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.628808  837622 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-828342"
	I1120 21:11:45.628873  837622 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-828342"
	I1120 21:11:45.628896  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.629316  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.631517  837622 addons.go:70] Setting cloud-spanner=true in profile "addons-828342"
	I1120 21:11:45.631545  837622 addons.go:239] Setting addon cloud-spanner=true in "addons-828342"
	I1120 21:11:45.631596  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.632078  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.632929  837622 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-828342"
	I1120 21:11:45.632962  837622 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-828342"
	I1120 21:11:45.633001  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.633420  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.636780  837622 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-828342"
	I1120 21:11:45.636858  837622 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-828342"
	I1120 21:11:45.636890  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.637345  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.639846  837622 addons.go:70] Setting registry=true in profile "addons-828342"
	I1120 21:11:45.639877  837622 addons.go:239] Setting addon registry=true in "addons-828342"
	I1120 21:11:45.639924  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.640526  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.647025  837622 addons.go:70] Setting default-storageclass=true in profile "addons-828342"
	I1120 21:11:45.647067  837622 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-828342"
	I1120 21:11:45.647389  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.655091  837622 addons.go:70] Setting registry-creds=true in profile "addons-828342"
	I1120 21:11:45.655122  837622 addons.go:239] Setting addon registry-creds=true in "addons-828342"
	I1120 21:11:45.655157  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.655635  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.660217  837622 addons.go:70] Setting gcp-auth=true in profile "addons-828342"
	I1120 21:11:45.660252  837622 mustload.go:66] Loading cluster: addons-828342
	I1120 21:11:45.660458  837622 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:11:45.660743  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.677052  837622 addons.go:70] Setting ingress=true in profile "addons-828342"
	I1120 21:11:45.677082  837622 addons.go:239] Setting addon ingress=true in "addons-828342"
	I1120 21:11:45.677226  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.677700  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.683581  837622 addons.go:70] Setting storage-provisioner=true in profile "addons-828342"
	I1120 21:11:45.716620  837622 addons.go:239] Setting addon storage-provisioner=true in "addons-828342"
	I1120 21:11:45.716718  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.717477  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.684657  837622 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-828342"
	I1120 21:11:45.742110  837622 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-828342"
	I1120 21:11:45.742828  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.746241  837622 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 21:11:45.746484  837622 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 21:11:45.763018  837622 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 21:11:45.763091  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 21:11:45.763202  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.684860  837622 addons.go:70] Setting volcano=true in profile "addons-828342"
	I1120 21:11:45.764017  837622 addons.go:239] Setting addon volcano=true in "addons-828342"
	I1120 21:11:45.764082  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.684907  837622 addons.go:70] Setting volumesnapshots=true in profile "addons-828342"
	I1120 21:11:45.764431  837622 addons.go:239] Setting addon volumesnapshots=true in "addons-828342"
	I1120 21:11:45.764479  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.764941  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.687145  837622 addons.go:70] Setting ingress-dns=true in profile "addons-828342"
	I1120 21:11:45.775957  837622 addons.go:239] Setting addon ingress-dns=true in "addons-828342"
	I1120 21:11:45.776031  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.776551  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.787148  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 21:11:45.787218  837622 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 21:11:45.787321  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.687189  837622 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:45.805357  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.808254  837622 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 21:11:45.810466  837622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:45.827864  837622 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 21:11:45.830780  837622 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 21:11:45.830810  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 21:11:45.830873  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.847049  837622 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 21:11:45.847068  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 21:11:45.850709  837622 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 21:11:45.850790  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.865586  837622 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 21:11:45.865607  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 21:11:45.865681  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.847073  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 21:11:45.888350  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 21:11:45.888998  837622 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 21:11:45.891370  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 21:11:45.891515  837622 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 21:11:45.891529  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1120 21:11:45.891639  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:45.926003  837622 addons.go:239] Setting addon default-storageclass=true in "addons-828342"
	I1120 21:11:45.926066  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.926614  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.927225  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.959592  837622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:45.960655  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 21:11:45.961793  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:45.968935  837622 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 21:11:45.969045  837622 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 21:11:45.975194  837622 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-828342"
	I1120 21:11:45.975252  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:45.975784  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:45.995405  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 21:11:45.997808  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 21:11:45.998173  837622 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 21:11:45.998188  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 21:11:45.998255  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.023494  837622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:11:46.030443  837622 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:46.030522  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:46.030638  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.037952  837622 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 21:11:46.043151  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 21:11:46.043331  837622 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 21:11:46.043354  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 21:11:46.043424  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.061114  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:46.061522  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.067757  837622 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 21:11:46.067781  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 21:11:46.067853  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.071162  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 21:11:46.074108  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 21:11:46.081022  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 21:11:46.081058  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 21:11:46.081140  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.094107  837622 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 21:11:46.096970  837622 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 21:11:46.096994  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 21:11:46.097062  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.104788  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.116887  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.124000  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	W1120 21:11:46.124089  837622 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 21:11:46.127178  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.135613  837622 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 21:11:46.138586  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 21:11:46.138613  837622 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 21:11:46.138692  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.138843  837622 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 21:11:46.141759  837622 out.go:179]   - Using image docker.io/busybox:stable
	I1120 21:11:46.147697  837622 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 21:11:46.147731  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 21:11:46.147797  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.191035  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.223114  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.225100  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.250745  837622 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:46.250766  837622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:46.250832  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:46.264440  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.264999  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.286589  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.288002  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.299241  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.307833  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	W1120 21:11:46.311464  837622 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 21:11:46.311508  837622 retry.go:31] will retry after 223.009585ms: ssh: handshake failed: EOF
	I1120 21:11:46.312725  837622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:46.333882  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:46.817370  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 21:11:46.817396  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 21:11:46.999049  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 21:11:47.008875  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 21:11:47.008900  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 21:11:47.073124  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 21:11:47.073151  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 21:11:47.083135  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 21:11:47.121400  837622 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 21:11:47.121425  837622 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 21:11:47.146676  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 21:11:47.165061  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 21:11:47.165087  837622 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 21:11:47.179014  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 21:11:47.198387  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 21:11:47.198413  837622 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 21:11:47.214671  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 21:11:47.219846  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 21:11:47.219871  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 21:11:47.262047  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 21:11:47.276469  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 21:11:47.304784  837622 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 21:11:47.304815  837622 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 21:11:47.327442  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:47.332376  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:47.338398  837622 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 21:11:47.338421  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 21:11:47.361299  837622 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 21:11:47.361324  837622 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 21:11:47.398387  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 21:11:47.398413  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 21:11:47.461930  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 21:11:47.461954  837622 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 21:11:47.516328  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 21:11:47.536784  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 21:11:47.541305  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 21:11:47.575058  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 21:11:47.575081  837622 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 21:11:47.674765  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 21:11:47.674791  837622 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 21:11:47.677666  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 21:11:47.677691  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 21:11:47.700335  837622 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.740705357s)
	I1120 21:11:47.700364  837622 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
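The replace pipeline that just completed rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. A minimal way to confirm the injected block, assuming the stock kubeadm CoreDNS ConfigMap layout (namespace kube-system, data key "Corefile"):

	# verify the hosts block injected by the sed expression above (sketch, not part of the test run)
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected fragment:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
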
	I1120 21:11:47.701331  837622 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.388578289s)
	I1120 21:11:47.701945  837622 node_ready.go:35] waiting up to 6m0s for node "addons-828342" to be "Ready" ...
	I1120 21:11:47.774161  837622 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:47.774185  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 21:11:47.902719  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 21:11:47.902745  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 21:11:47.930807  837622 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 21:11:47.930831  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 21:11:47.963580  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:48.109221  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 21:11:48.137479  837622 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 21:11:48.137556  837622 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 21:11:48.205867  837622 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-828342" context rescaled to 1 replicas
	I1120 21:11:48.392428  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 21:11:48.392503  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 21:11:48.601064  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 21:11:48.601142  837622 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 21:11:48.864932  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 21:11:48.864953  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 21:11:48.886375  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 21:11:48.886396  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 21:11:48.902377  837622 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 21:11:48.902398  837622 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 21:11:48.920137  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1120 21:11:49.745093  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:50.523043  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.439869235s)
	I1120 21:11:50.523107  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.376412558s)
	I1120 21:11:50.523130  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.344098554s)
	I1120 21:11:50.523150  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.308459104s)
	I1120 21:11:50.523224  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.524092253s)
	W1120 21:11:52.220089  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:52.224973  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.962888478s)
	I1120 21:11:52.225003  837622 addons.go:480] Verifying addon ingress=true in "addons-828342"
	I1120 21:11:52.225242  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.948742376s)
	I1120 21:11:52.225299  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.897835687s)
	I1120 21:11:52.225546  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.893143232s)
	I1120 21:11:52.225608  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.709254658s)
	I1120 21:11:52.225617  837622 addons.go:480] Verifying addon metrics-server=true in "addons-828342"
	I1120 21:11:52.225660  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.688850881s)
	I1120 21:11:52.225762  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.684432841s)
	I1120 21:11:52.225770  837622 addons.go:480] Verifying addon registry=true in "addons-828342"
	I1120 21:11:52.226189  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.262572736s)
	W1120 21:11:52.226217  837622 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 21:11:52.226233  837622 retry.go:31] will retry after 225.039773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
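The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same kubectl apply invocation that creates its CRDs, and the snapshot.storage.k8s.io/v1 mapping is not yet established when the class is applied, so minikube schedules a retry (the retry at 21:11:52 below succeeds once the CRDs are registered). A minimal sketch of avoiding the race by hand, assuming the same addon manifests under /etc/kubernetes/addons:

	# wait for the snapshot CRD to be established before applying objects that use it (sketch)
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
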
	I1120 21:11:52.226273  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117023249s)
	I1120 21:11:52.228518  837622 out.go:179] * Verifying ingress addon...
	I1120 21:11:52.230681  837622 out.go:179] * Verifying registry addon...
	I1120 21:11:52.232657  837622 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-828342 service yakd-dashboard -n yakd-dashboard
	
	I1120 21:11:52.233509  837622 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 21:11:52.236542  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 21:11:52.244165  837622 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 21:11:52.244193  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:52.248815  837622 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 21:11:52.248861  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1120 21:11:52.263333  837622 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
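The 'default-storageclass' warning above is an optimistic-concurrency conflict: the local-path StorageClass was updated between minikube's read and its write clearing the default-class annotation, so the API server rejects the stale update. A hedged sketch of the manual equivalent, assuming minikube's usual "standard" StorageClass name and the standard default-class annotation:

	# mark local-path as non-default and "standard" as the default class (sketch)
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
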
	I1120 21:11:52.451562  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 21:11:52.525647  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.605423152s)
	I1120 21:11:52.525680  837622 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-828342"
	I1120 21:11:52.528007  837622 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 21:11:52.531731  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 21:11:52.546541  837622 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 21:11:52.546562  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:52.740006  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:52.740762  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.035695  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:53.238822  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:53.239087  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.535514  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:53.634738  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 21:11:53.634854  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:53.652131  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:53.737382  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:53.739068  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:53.761217  837622 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 21:11:53.775492  837622 addons.go:239] Setting addon gcp-auth=true in "addons-828342"
	I1120 21:11:53.775544  837622 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:11:53.775999  837622 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:11:53.793047  837622 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 21:11:53.793107  837622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:11:53.811634  837622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:11:54.035752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:54.237901  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:54.239177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:54.537038  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:54.705098  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:54.737809  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:54.740298  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:55.035669  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:55.158637  837622 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.707017688s)
	I1120 21:11:55.158776  837622 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.365585263s)
	I1120 21:11:55.161611  837622 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 21:11:55.164439  837622 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 21:11:55.167378  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 21:11:55.167402  837622 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 21:11:55.181031  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 21:11:55.181055  837622 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 21:11:55.194157  837622 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 21:11:55.194190  837622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 21:11:55.209775  837622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 21:11:55.237647  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:55.240206  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:55.535582  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:55.721759  837622 addons.go:480] Verifying addon gcp-auth=true in "addons-828342"
	I1120 21:11:55.724880  837622 out.go:179] * Verifying gcp-auth addon...
	I1120 21:11:55.729408  837622 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 21:11:55.732278  837622 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 21:11:55.732305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:55.740599  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:55.741555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:56.035183  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:56.232626  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:56.237394  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:56.239284  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:56.535467  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:56.705624  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:56.732800  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:56.736329  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:56.739571  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:57.035122  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:57.232609  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:57.239226  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:57.239796  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:57.535045  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:57.732438  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:57.736721  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:57.747375  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:58.035807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:58.232550  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:58.236176  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:58.239701  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:58.536089  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:58.733078  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:58.736593  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:58.739166  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:59.035694  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:11:59.205282  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:11:59.233230  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:59.237244  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:59.239029  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:11:59.535541  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:11:59.732254  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:11:59.736838  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:11:59.739223  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:00.047865  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:00.247149  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:00.250410  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:00.253641  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:00.535573  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:00.733229  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:00.737624  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:00.739660  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:01.035563  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:01.205794  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:01.232668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:01.236799  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:01.239380  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:01.537258  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:01.733017  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:01.738084  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:01.740280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:02.035769  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:02.232948  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:02.236925  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:02.239640  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:02.535566  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:02.732210  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:02.737255  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:02.739613  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:03.035818  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:03.232947  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:03.236770  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:03.239291  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:03.535581  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:03.705384  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:03.732177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:03.737220  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:03.739681  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:04.034630  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:04.232499  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:04.237330  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:04.239237  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:04.536287  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:04.733182  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:04.737084  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:04.739113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:05.036080  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:05.233493  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:05.237230  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:05.239266  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:05.535585  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:05.705649  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:05.732546  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:05.736491  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:05.740247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:06.034801  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:06.232630  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:06.237716  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:06.240005  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:06.536217  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:06.733285  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:06.738239  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:06.739975  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:07.036406  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:07.233027  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:07.236458  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:07.241356  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:07.535619  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:07.732063  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:07.736927  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:07.739120  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:08.035811  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:08.204728  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:08.232715  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:08.236381  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:08.239990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:08.535564  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:08.732082  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:08.737108  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:08.739330  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:09.036283  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:09.233189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:09.236560  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:09.240014  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:09.536134  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:09.732671  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:09.736189  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:09.739556  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:10.035225  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:10.205320  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:10.233725  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:10.236184  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:10.239773  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:10.535632  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:10.732679  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:10.737225  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:10.739254  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:11.036019  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:11.232718  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:11.236630  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:11.238845  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:11.535248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:11.732180  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:11.737847  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:11.739990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:12.035575  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:12.233060  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:12.236613  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:12.238868  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:12.535931  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:12.704872  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:12.732964  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:12.736357  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:12.739686  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:13.035679  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:13.232712  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:13.236592  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:13.239270  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:13.535306  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:13.732907  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:13.736587  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:13.739648  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:14.034887  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:14.233189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:14.237249  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:14.239535  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:14.536305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:14.705240  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:14.733140  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:14.736641  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:14.739111  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:15.037305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:15.232599  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:15.237550  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:15.239453  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:15.536469  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:15.732555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:15.736945  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:15.739113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:16.035782  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:16.233699  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:16.236389  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:16.239757  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:16.535813  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:16.705738  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:16.732899  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:16.736449  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:16.740017  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:17.035241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:17.233094  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:17.236811  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:17.239426  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:17.536372  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:17.732855  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:17.736461  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:17.739790  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:18.034939  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:18.232771  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:18.236891  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:18.239153  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:18.539037  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:18.732640  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:18.737342  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:18.739930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:19.035197  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:19.205372  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:19.233174  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:19.237079  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:19.239329  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:19.536758  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:19.732364  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:19.737533  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:19.739646  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:20.035231  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:20.233877  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:20.236643  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:20.239190  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:20.535577  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:20.732794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:20.737580  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:20.739410  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:21.034635  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:21.205548  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:21.233498  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:21.237001  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:21.242370  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:21.535345  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:21.738960  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:21.739132  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:21.740567  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:22.034580  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:22.233246  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:22.237313  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:22.239654  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:22.534730  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:22.732754  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:22.737491  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:22.739705  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:23.034741  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:23.205589  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:23.232251  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:23.236996  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:23.239620  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:23.534511  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:23.733462  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:23.738478  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:23.739710  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:24.034929  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:24.232786  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:24.236679  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:24.239235  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:24.535658  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:24.732461  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:24.736469  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:24.740106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:25.035212  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:25.232859  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:25.237076  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:25.239390  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:25.535355  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 21:12:25.705478  837622 node_ready.go:57] node "addons-828342" has "Ready":"False" status (will retry)
	I1120 21:12:25.733097  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:25.737696  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:25.739963  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:26.035569  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:26.232410  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:26.237564  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:26.239838  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:26.534847  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:26.732696  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:26.736778  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:26.739260  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.035568  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:27.233349  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:27.237471  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:27.239783  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.535305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:27.740837  837622 node_ready.go:49] node "addons-828342" is "Ready"
	I1120 21:12:27.740918  837622 node_ready.go:38] duration metric: took 40.038947656s for node "addons-828342" to be "Ready" ...
	I1120 21:12:27.740959  837622 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:12:27.741059  837622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:12:27.751390  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:27.757743  837622 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 21:12:27.757765  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:27.764786  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:27.765205  837622 api_server.go:72] duration metric: took 42.139086459s to wait for apiserver process to appear ...
	I1120 21:12:27.765225  837622 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:12:27.765243  837622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:12:27.804794  837622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:12:27.808448  837622 api_server.go:141] control plane version: v1.34.1
	I1120 21:12:27.808482  837622 api_server.go:131] duration metric: took 43.249439ms to wait for apiserver health ...
	I1120 21:12:27.808491  837622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:12:27.820770  837622 system_pods.go:59] 19 kube-system pods found
	I1120 21:12:27.820813  837622 system_pods.go:61] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending
	I1120 21:12:27.820829  837622 system_pods.go:61] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:27.820839  837622 system_pods.go:61] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:27.820845  837622 system_pods.go:61] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:27.820851  837622 system_pods.go:61] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:27.820855  837622 system_pods.go:61] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:27.820864  837622 system_pods.go:61] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:27.820869  837622 system_pods.go:61] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:27.820877  837622 system_pods.go:61] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending
	I1120 21:12:27.820882  837622 system_pods.go:61] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:27.820893  837622 system_pods.go:61] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:27.820897  837622 system_pods.go:61] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending
	I1120 21:12:27.820904  837622 system_pods.go:61] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:27.820914  837622 system_pods.go:61] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:27.820919  837622 system_pods.go:61] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:27.820924  837622 system_pods.go:61] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:27.820932  837622 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.820942  837622 system_pods.go:61] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending
	I1120 21:12:27.820946  837622 system_pods.go:61] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending
	I1120 21:12:27.820951  837622 system_pods.go:74] duration metric: took 12.454839ms to wait for pod list to return data ...
	I1120 21:12:27.820959  837622 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:12:27.832167  837622 default_sa.go:45] found service account: "default"
	I1120 21:12:27.832196  837622 default_sa.go:55] duration metric: took 11.229405ms for default service account to be created ...
	I1120 21:12:27.832205  837622 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:12:27.852035  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:27.852074  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:27.852083  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:27.852091  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:27.852096  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:27.852101  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:27.852106  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:27.852110  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:27.852115  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:27.852124  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending
	I1120 21:12:27.852128  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:27.852135  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:27.852140  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending
	I1120 21:12:27.852144  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:27.852149  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:27.852158  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:27.852164  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:27.852171  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.852182  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:27.852187  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending
	I1120 21:12:27.852200  837622 retry.go:31] will retry after 245.675374ms: missing components: kube-dns
	I1120 21:12:28.042008  837622 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 21:12:28.042034  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:28.129077  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.129115  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.129123  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending
	I1120 21:12:28.129130  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.129134  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending
	I1120 21:12:28.129140  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.129145  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.129151  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.129159  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.129165  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.129170  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.129177  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.129183  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.129194  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending
	I1120 21:12:28.129201  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.129205  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending
	I1120 21:12:28.129222  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending
	I1120 21:12:28.129228  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.129235  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.129241  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.129258  837622 retry.go:31] will retry after 344.736357ms: missing components: kube-dns
	I1120 21:12:28.236398  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:28.243155  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:28.249209  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:28.484942  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.484981  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.484993  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:28.485009  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.485017  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:28.485027  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.485033  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.485042  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.485047  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.485053  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.485064  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.485069  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.485076  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.485087  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:28.485094  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.485109  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:28.485118  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:28.485124  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.485133  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.485141  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.485161  837622 retry.go:31] will retry after 429.263194ms: missing components: kube-dns
	I1120 21:12:28.582597  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:28.732770  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:28.736810  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:28.740411  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:28.924574  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:28.924612  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:12:28.924622  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:28.924629  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:28.924635  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:28.924640  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:28.924647  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:28.924652  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:28.924657  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:28.924663  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:28.924667  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:28.924671  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:28.924678  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:28.924684  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:28.924690  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:28.924706  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:28.924717  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:28.924724  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.924731  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:28.924743  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:12:28.924758  837622 retry.go:31] will retry after 390.95466ms: missing components: kube-dns
	I1120 21:12:29.035782  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:29.245140  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:29.245814  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:29.248207  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:29.323099  837622 system_pods.go:86] 19 kube-system pods found
	I1120 21:12:29.323132  837622 system_pods.go:89] "coredns-66bc5c9577-k2xjd" [e921a052-4df1-4508-a858-e14c90ca16b1] Running
	I1120 21:12:29.323142  837622 system_pods.go:89] "csi-hostpath-attacher-0" [40af5bba-19d2-4fd0-a018-a59cbe5b3f1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 21:12:29.323149  837622 system_pods.go:89] "csi-hostpath-resizer-0" [ac9387ef-6ac9-4574-a176-e0b9056c5d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 21:12:29.323156  837622 system_pods.go:89] "csi-hostpathplugin-l4wrc" [c4c7930f-e634-471e-b301-53c3e44ede91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 21:12:29.323161  837622 system_pods.go:89] "etcd-addons-828342" [25992a3a-8718-44bc-a118-ecba17b18ec4] Running
	I1120 21:12:29.323165  837622 system_pods.go:89] "kindnet-mb5xh" [ec4eadcf-ae3e-4fff-8b25-451b591e8503] Running
	I1120 21:12:29.323171  837622 system_pods.go:89] "kube-apiserver-addons-828342" [c451fae4-4867-47d4-a41f-7cd37ab21a15] Running
	I1120 21:12:29.323178  837622 system_pods.go:89] "kube-controller-manager-addons-828342" [da939beb-e7ab-4933-88a0-08f8d4745add] Running
	I1120 21:12:29.323185  837622 system_pods.go:89] "kube-ingress-dns-minikube" [d5b9462e-96a2-4854-973d-7dc6b45f1458] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 21:12:29.323207  837622 system_pods.go:89] "kube-proxy-7p2c4" [ebd799ae-65d8-457e-b684-925b6c33db63] Running
	I1120 21:12:29.323218  837622 system_pods.go:89] "kube-scheduler-addons-828342" [6b30cef2-c462-4801-b34f-04ed0dc721df] Running
	I1120 21:12:29.323224  837622 system_pods.go:89] "metrics-server-85b7d694d7-hwvxs" [aa4b4e26-ab05-42d7-89ad-4c20ed9f5fab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 21:12:29.323231  837622 system_pods.go:89] "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 21:12:29.323241  837622 system_pods.go:89] "registry-6b586f9694-5shs6" [42230274-cb50-4d44-8285-0f2caf2a0323] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 21:12:29.323249  837622 system_pods.go:89] "registry-creds-764b6fb674-6zgsm" [9b28e075-2521-408c-86c7-38c6b7b056b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 21:12:29.323255  837622 system_pods.go:89] "registry-proxy-k8tlb" [060c24e9-2190-44df-b27c-78a133efd64b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 21:12:29.323261  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4sk4t" [06f0eb28-6df7-428a-95f8-7eb183c8cb1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:29.323269  837622 system_pods.go:89] "snapshot-controller-7d9fbc56b8-plxlw" [bce19291-abba-43c1-b4b2-adce64b7177b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 21:12:29.323273  837622 system_pods.go:89] "storage-provisioner" [e76e4b45-6243-4a54-8882-1a069f875052] Running
	I1120 21:12:29.323284  837622 system_pods.go:126] duration metric: took 1.491071478s to wait for k8s-apps to be running ...
	I1120 21:12:29.323295  837622 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:12:29.323355  837622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:12:29.341968  837622 system_svc.go:56] duration metric: took 18.663055ms WaitForService to wait for kubelet
	I1120 21:12:29.341997  837622 kubeadm.go:587] duration metric: took 43.715881455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:12:29.342016  837622 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:12:29.346088  837622 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:12:29.346125  837622 node_conditions.go:123] node cpu capacity is 2
	I1120 21:12:29.346139  837622 node_conditions.go:105] duration metric: took 4.117031ms to run NodePressure ...
	I1120 21:12:29.346151  837622 start.go:242] waiting for startup goroutines ...
	I1120 21:12:29.537319  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:29.732336  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:29.737421  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:29.740464  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:30.046320  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:30.233170  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:30.237134  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:30.239668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:30.536438  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:30.733901  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:30.737078  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:30.739119  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:31.035608  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:31.232752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:31.236456  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:31.240089  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:31.537283  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:31.735821  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:31.747197  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:31.834516  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:32.035721  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:32.233542  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:32.237487  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:32.241621  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:32.536003  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:32.733900  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:32.737222  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:32.739738  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:33.035326  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:33.233368  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:33.238424  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:33.240049  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:33.537539  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:33.732693  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:33.737561  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:33.739545  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:34.036175  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:34.233326  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:34.237018  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:34.239561  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:34.535241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:34.732657  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:34.736795  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:34.740480  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:35.036433  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:35.233419  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:35.238511  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:35.240296  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:35.536557  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:35.732895  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:35.736905  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:35.739669  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:36.035923  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:36.233136  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:36.238362  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:36.240607  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:36.535752  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:36.733289  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:36.737919  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:36.739906  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:37.035671  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:37.232685  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:37.236753  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:37.239294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:37.536108  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:37.736924  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:37.739680  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:37.744399  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:38.035930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:38.233382  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:38.238054  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:38.244291  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:38.538792  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:38.733302  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:38.737560  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:38.739495  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:39.034926  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:39.233247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:39.237165  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:39.239505  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:39.537182  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:39.733786  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:39.736396  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:39.739990  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:40.046430  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:40.233295  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:40.237345  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:40.240190  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:40.535668  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:40.733123  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:40.737942  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:40.740247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:41.035710  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:41.232454  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:41.238251  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:41.239235  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:41.537045  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:41.733375  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:41.739135  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:41.740485  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:42.036093  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:42.233833  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:42.238765  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:42.241162  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:42.537878  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:42.733743  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:42.737793  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:42.739497  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:43.035239  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:43.233351  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:43.237101  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:43.239106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:43.544807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:43.733244  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:43.738635  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:43.740808  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:44.036568  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:44.232835  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:44.236896  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:44.240229  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 21:12:44.537308  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:44.737158  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:44.744747  837622 kapi.go:107] duration metric: took 52.508202833s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 21:12:44.745294  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:45.046351  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:45.239434  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:45.239930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:45.536176  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:45.732297  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:45.738288  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:46.036422  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:46.232794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:46.236680  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:46.535485  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:46.733729  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:46.736724  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:47.038654  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:47.233393  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:47.237718  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:47.541712  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:47.733382  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:47.737711  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:48.035899  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:48.233114  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:48.236951  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:48.535278  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:48.733515  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:48.737328  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:49.036153  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:49.233247  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:49.238101  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:49.535945  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:49.733566  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:49.736988  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:50.037294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:50.241987  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:50.242324  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:50.535854  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:50.734925  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:50.746380  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:51.036852  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:51.233459  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:51.237535  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:51.536198  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:51.733211  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:51.742637  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:52.036362  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:52.233631  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:52.237216  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:52.537152  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:52.732544  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:52.753418  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:53.037021  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:53.233520  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:53.237883  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:53.535128  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:53.733232  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:53.738108  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:54.036639  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:54.233317  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:54.237382  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:54.537398  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:54.733825  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:54.738098  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:55.038273  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:55.237177  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:55.239130  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:55.538087  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:55.736625  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:55.738544  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:56.036580  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:56.232957  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:56.237361  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:56.541300  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:56.733953  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:56.736785  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:57.036078  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:57.233128  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:57.236914  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:57.535592  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:57.733407  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:57.738235  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:58.036037  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:58.233954  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:58.236735  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:58.535736  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:58.735296  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:58.736996  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:59.035248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:59.233103  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:59.236993  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:12:59.536234  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:12:59.733903  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:12:59.737457  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:00.084586  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:00.239689  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:00.239824  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:00.536441  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:00.733148  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:00.737466  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:01.035820  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:01.233452  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:01.238473  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:01.536216  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:01.733589  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:01.736546  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:02.036663  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:02.233116  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:02.237842  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:02.535915  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:02.732135  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:02.736953  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:03.035954  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:03.232930  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:03.236858  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:03.536194  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:03.733392  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:03.737262  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:04.035944  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:04.233285  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:04.237470  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:04.536807  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:04.734228  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:04.736721  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:05.036292  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:05.232676  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:05.236343  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:05.536240  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:05.733106  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:05.736752  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:06.036120  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:06.232569  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:06.237531  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:06.536429  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:06.732407  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:06.737628  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:07.035427  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:07.232293  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:07.239843  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:07.535400  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:07.733189  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:07.736807  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:08.035418  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:08.232431  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:08.237366  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:08.535932  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:08.733073  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:08.737219  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:09.035989  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:09.233173  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:09.237411  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:09.535851  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:09.732794  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:09.737304  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.038932  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:10.234766  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:10.238143  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.547209  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:10.739389  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:10.739860  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.036525  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:11.234181  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.236534  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:11.539611  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:11.732778  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:11.736540  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:12.036360  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:12.232276  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:12.237443  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:12.534870  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:12.737628  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:12.739449  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:13.036858  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:13.233113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:13.237011  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:13.536206  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:13.732591  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:13.737039  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:14.039251  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:14.232684  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:14.237021  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:14.535474  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:14.733611  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:14.737325  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:15.038896  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:15.233357  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:15.237174  837622 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 21:13:15.548555  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:15.743700  837622 kapi.go:107] duration metric: took 1m23.510188589s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 21:13:15.743838  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:16.036937  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:16.233369  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:16.536294  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:16.732586  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:17.100889  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:17.233095  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:17.535704  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:17.733043  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:18.037599  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:18.233052  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:18.536420  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:18.732715  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:19.036913  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:19.234855  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:19.560113  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:19.747119  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:20.036416  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:20.233583  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:20.540993  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:20.735951  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:21.037567  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:21.232393  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:21.535523  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:21.733112  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:22.036125  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:22.234377  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:22.536280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:22.732796  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:23.042015  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:23.233160  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:23.536248  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:23.733218  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:24.045653  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:24.233532  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:24.536771  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:24.733088  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:25.038354  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:25.233023  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:25.536040  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 21:13:25.741667  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:26.035653  837622 kapi.go:107] duration metric: took 1m33.503922107s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 21:13:26.232820  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:26.732695  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:27.233058  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:27.732789  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:28.233183  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:28.732305  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:29.232661  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:29.733241  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:30.233629  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:30.733232  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:31.232719  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:31.733060  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:32.232745  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:32.733790  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:33.233909  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:33.733593  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:34.232764  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:34.741280  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:35.233616  837622 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 21:13:35.733478  837622 kapi.go:107] duration metric: took 1m40.004071116s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 21:13:35.734600  837622 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-828342 cluster.
	I1120 21:13:35.735701  837622 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 21:13:35.736834  837622 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1120 21:13:35.738046  837622 out.go:179] * Enabled addons: cloud-spanner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1120 21:13:35.740070  837622 addons.go:515] duration metric: took 1m50.113555557s for enable addons: enabled=[cloud-spanner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1120 21:13:35.740112  837622 start.go:247] waiting for cluster config update ...
	I1120 21:13:35.740133  837622 start.go:256] writing updated cluster config ...
	I1120 21:13:35.740413  837622 ssh_runner.go:195] Run: rm -f paused
	I1120 21:13:35.745864  837622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:13:35.766366  837622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.774029  837622 pod_ready.go:94] pod "coredns-66bc5c9577-k2xjd" is "Ready"
	I1120 21:13:35.774112  837622 pod_ready.go:86] duration metric: took 7.721194ms for pod "coredns-66bc5c9577-k2xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.861930  837622 pod_ready.go:83] waiting for pod "etcd-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.868429  837622 pod_ready.go:94] pod "etcd-addons-828342" is "Ready"
	I1120 21:13:35.868454  837622 pod_ready.go:86] duration metric: took 6.496629ms for pod "etcd-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.870912  837622 pod_ready.go:83] waiting for pod "kube-apiserver-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.877730  837622 pod_ready.go:94] pod "kube-apiserver-addons-828342" is "Ready"
	I1120 21:13:35.877756  837622 pod_ready.go:86] duration metric: took 6.815164ms for pod "kube-apiserver-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:35.884045  837622 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.150385  837622 pod_ready.go:94] pod "kube-controller-manager-addons-828342" is "Ready"
	I1120 21:13:36.150411  837622 pod_ready.go:86] duration metric: took 266.339003ms for pod "kube-controller-manager-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.350021  837622 pod_ready.go:83] waiting for pod "kube-proxy-7p2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.750235  837622 pod_ready.go:94] pod "kube-proxy-7p2c4" is "Ready"
	I1120 21:13:36.750265  837622 pod_ready.go:86] duration metric: took 400.215258ms for pod "kube-proxy-7p2c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:36.950410  837622 pod_ready.go:83] waiting for pod "kube-scheduler-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:37.349905  837622 pod_ready.go:94] pod "kube-scheduler-addons-828342" is "Ready"
	I1120 21:13:37.349940  837622 pod_ready.go:86] duration metric: took 399.504603ms for pod "kube-scheduler-addons-828342" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:13:37.349953  837622 pod_ready.go:40] duration metric: took 1.604057387s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:13:37.410721  837622 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 21:13:37.412323  837622 out.go:179] * Done! kubectl is now configured to use "addons-828342" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:14:17 addons-828342 crio[826]: time="2025-11-20T21:14:17.929505926Z" level=info msg="Removed container fe4dc29c717212504b515ee1ed70266d3e2b8f411dd9e1fcd012680b51f4ca23: default/task-pv-pod/task-pv-container" id=f02c405d-b1f9-4137-a864-990ca5b2b78d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:14:18 addons-828342 crio[826]: time="2025-11-20T21:14:18.919550146Z" level=info msg="Stopping pod sandbox: 47d1f7effd2231f98046ea546351ef173a490a2c126446a1947e57aa04e6e67b" id=614c452e-111f-4d4b-80e0-b3ec92c91a30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 21:14:18 addons-828342 crio[826]: time="2025-11-20T21:14:18.91981474Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:47d1f7effd2231f98046ea546351ef173a490a2c126446a1947e57aa04e6e67b UID:f535306f-808c-46c2-b0f0-59c964602b6f NetNS:/var/run/netns/32a5ff90-d857-4f8d-8387-e70c9bd55de1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400219cc40}] Aliases:map[]}"
	Nov 20 21:14:18 addons-828342 crio[826]: time="2025-11-20T21:14:18.919956747Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:14:18 addons-828342 crio[826]: time="2025-11-20T21:14:18.945120986Z" level=info msg="Stopped pod sandbox: 47d1f7effd2231f98046ea546351ef173a490a2c126446a1947e57aa04e6e67b" id=614c452e-111f-4d4b-80e0-b3ec92c91a30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.181726047Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e/POD" id=8806c856-e8ae-4dec-bcc4-1acaa1f2aa3d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.181796415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.201699855Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e Namespace:local-path-storage ID:350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed UID:cc6b0489-c669-42ad-8792-36d3d7511d4b NetNS:/var/run/netns/13ef9883-8cec-43a1-89d0-91a47d0acf1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027513f0}] Aliases:map[]}"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.2047794Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.221028206Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e Namespace:local-path-storage ID:350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed UID:cc6b0489-c669-42ad-8792-36d3d7511d4b NetNS:/var/run/netns/13ef9883-8cec-43a1-89d0-91a47d0acf1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027513f0}] Aliases:map[]}"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.221444145Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e for CNI network kindnet (type=ptp)"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.226306744Z" level=info msg="Ran pod sandbox 350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed with infra container: local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e/POD" id=8806c856-e8ae-4dec-bcc4-1acaa1f2aa3d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.22770782Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=03cd79ae-d2dd-4769-b4ea-015eff66ae2e name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.231653985Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=599d325c-d1f5-4f10-94dc-721464585a7d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.242270115Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e/helper-pod" id=34654d0f-5173-44c8-9edc-91e9b3f439c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.242378851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.257481157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.258179364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.276790103Z" level=info msg="Created container 05ebba8c5d4620b653947a4c63e8edccddef69e117be207a0355bbc5579126ea: local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e/helper-pod" id=34654d0f-5173-44c8-9edc-91e9b3f439c6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.277926707Z" level=info msg="Starting container: 05ebba8c5d4620b653947a4c63e8edccddef69e117be207a0355bbc5579126ea" id=fe198869-0d6e-44a6-9d97-4a2d5aa781f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:14:20 addons-828342 crio[826]: time="2025-11-20T21:14:20.282825458Z" level=info msg="Started container" PID=5648 containerID=05ebba8c5d4620b653947a4c63e8edccddef69e117be207a0355bbc5579126ea description=local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e/helper-pod id=fe198869-0d6e-44a6-9d97-4a2d5aa781f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed
	Nov 20 21:14:21 addons-828342 crio[826]: time="2025-11-20T21:14:21.945354853Z" level=info msg="Stopping pod sandbox: 350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed" id=8602da3c-a54e-45cf-93bb-1c18acf74827 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 21:14:21 addons-828342 crio[826]: time="2025-11-20T21:14:21.945648518Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e Namespace:local-path-storage ID:350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed UID:cc6b0489-c669-42ad-8792-36d3d7511d4b NetNS:/var/run/netns/13ef9883-8cec-43a1-89d0-91a47d0acf1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400219c3e0}] Aliases:map[]}"
	Nov 20 21:14:21 addons-828342 crio[826]: time="2025-11-20T21:14:21.94579275Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e from CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:14:21 addons-828342 crio[826]: time="2025-11-20T21:14:21.969181163Z" level=info msg="Stopped pod sandbox: 350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed" id=8602da3c-a54e-45cf-93bb-1c18acf74827 name=/runtime.v1.RuntimeService/StopPodSandbox
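
The CRI-O entries above trace the CRI call sequence for the local-path helper pod (RunPodSandbox, CreateContainer, StartContainer, then StopPodSandbox once the helper exits). The same stream can be followed on the node itself; a sketch, assuming CRI-O runs as the crio systemd unit suggested by this run's crio[826] log prefix:

	minikube -p addons-828342 ssh -- sudo journalctl -u crio --no-pager -n 50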
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	05ebba8c5d462       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             7 seconds ago        Exited              helper-pod                               0                   350bcaf6f39ef       helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e   local-path-storage
	1f5274ff49a45       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            10 seconds ago       Exited              busybox                                  0                   47d1f7effd223       test-local-path                                              default
	4feba7a4483ff       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            14 seconds ago       Exited              helper-pod                               0                   46ad808cfd1a6       helper-pod-create-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e   local-path-storage
	0699f45d1c308       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          25 seconds ago       Exited              registry-test                            0                   c85a4f2e80775       registry-test                                                default
	b0a1af515e664       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          47 seconds ago       Running             busybox                                  0                   a9763d198a84a       busybox                                                      default
	f89684ba3974d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 52 seconds ago       Running             gcp-auth                                 0                   80baf843833cb       gcp-auth-78565c9fb4-xchxl                                    gcp-auth
	048a91057c75b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	638a6b27e3a29       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    2                   eb7cb198fa82c       gcp-auth-certs-patch-wlnfh                                   gcp-auth
	e1b29a88eeca4       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	4cf3d3324d8e7       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	95aebe3ee5042       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	22af9833d5a05       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   ffc3ec4e6c29a       gadget-rkcm9                                                 gadget
	327a93daa8d9b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             About a minute ago   Running             controller                               0                   998d839f19021       ingress-nginx-controller-6c8bf45fb-7xbwt                     ingress-nginx
	e0b907ada2744       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	330fe3e722047       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   026c561e7e8d5       gcp-auth-certs-create-mrgpg                                  gcp-auth
	587113023c460       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   fcaad6f163d0d       yakd-dashboard-5ff678cb9-788wg                               yakd-dashboard
	d877d3a1d3b44       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   9419e4e2c7682       nvidia-device-plugin-daemonset-sh7sx                         kube-system
	30158179e15c3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   ddef337b0bbed       snapshot-controller-7d9fbc56b8-plxlw                         kube-system
	070e65f471ee1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   bdc2d78ba839e       local-path-provisioner-648f6765c9-zsvx2                      local-path-storage
	a93f40eb30f48       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   839ef8206254a       csi-hostpath-attacher-0                                      kube-system
	c5c88ac4e46db       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   0f216ced2014f       csi-hostpath-resizer-0                                       kube-system
	f5429fe8d6eae       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   94208d8aa2550       metrics-server-85b7d694d7-hwvxs                              kube-system
	12065726cc690       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   8e5a5c82ae9c4       kube-ingress-dns-minikube                                    kube-system
	1c5f45287ca2f       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    2                   0c35fb6c28126       ingress-nginx-admission-patch-n279x                          ingress-nginx
	1c684f5b792d7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   89e529b563aad       registry-proxy-k8tlb                                         kube-system
	284630d028c28       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   9f654e327c852       csi-hostpathplugin-l4wrc                                     kube-system
	27420be397785       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   e1a6307be0e94       ingress-nginx-admission-create-jxltn                         ingress-nginx
	cbe1df1a85fe8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   81b056c07e3d4       cloud-spanner-emulator-6f9fcf858b-2p6j9                      default
	58a00a031d21a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   01a887d41e1f7       registry-6b586f9694-5shs6                                    kube-system
	a5870aba6804f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   fc24cb66dc37d       snapshot-controller-7d9fbc56b8-4sk4t                         kube-system
	4dfccd2918ac5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   fd25e873c5cde       coredns-66bc5c9577-k2xjd                                     kube-system
	c82f61a3038fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   eef098cdf4227       storage-provisioner                                          kube-system
	20980cdb4eaaa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   118f0190d9296       kube-proxy-7p2c4                                             kube-system
	6896f41cbd9c3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   463b35f90d92e       kindnet-mb5xh                                                kube-system
	159ee609cc9eb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   01a3a920af154       kube-controller-manager-addons-828342                        kube-system
	5e20cd420abae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   9b93659d335bb       kube-apiserver-addons-828342                                 kube-system
	1f333dfa546bf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   4b80006fc77d4       etcd-addons-828342                                           kube-system
	303e566caaff9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   a6835c6176352       kube-scheduler-addons-828342                                 kube-system
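
A listing equivalent to the container status table above can be pulled straight from the runtime with crictl on the node; a sketch, again assuming the addons-828342 profile:

	minikube -p addons-828342 ssh -- sudo crictl pods
	minikube -p addons-828342 ssh -- sudo crictl ps -a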
	
	
	==> coredns [4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396] <==
	[INFO] 10.244.0.8:39998 - 25459 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.007050638s
	[INFO] 10.244.0.8:39998 - 46314 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000287134s
	[INFO] 10.244.0.8:39998 - 7518 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000321932s
	[INFO] 10.244.0.8:44384 - 11382 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148244s
	[INFO] 10.244.0.8:44384 - 11644 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000297874s
	[INFO] 10.244.0.8:57166 - 2988 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112354s
	[INFO] 10.244.0.8:57166 - 2759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101827s
	[INFO] 10.244.0.8:48136 - 28771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100776s
	[INFO] 10.244.0.8:48136 - 28599 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105125s
	[INFO] 10.244.0.8:45202 - 45913 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006390904s
	[INFO] 10.244.0.8:45202 - 46085 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006993825s
	[INFO] 10.244.0.8:60718 - 1787 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145421s
	[INFO] 10.244.0.8:60718 - 1375 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019512s
	[INFO] 10.244.0.21:40334 - 18585 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000198468s
	[INFO] 10.244.0.21:52255 - 59226 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000085975s
	[INFO] 10.244.0.21:53134 - 60261 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000150131s
	[INFO] 10.244.0.21:41377 - 31984 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081109s
	[INFO] 10.244.0.21:60288 - 42355 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102787s
	[INFO] 10.244.0.21:33125 - 21001 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101811s
	[INFO] 10.244.0.21:41499 - 36129 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001549426s
	[INFO] 10.244.0.21:55809 - 31914 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002183608s
	[INFO] 10.244.0.21:57791 - 57729 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000852778s
	[INFO] 10.244.0.21:46837 - 17090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002060414s
	[INFO] 10.244.0.23:44196 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234998s
	[INFO] 10.244.0.23:53941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165418s
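
The NXDOMAIN bursts above are the expected effect of the pod resolv.conf search list with the default ndots:5: a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so each search domain is appended and tried before the name is resolved as written. This can be confirmed from any pod with a shell, for example the busybox pod this run created in the default namespace (a sketch, assuming that pod is still running):

	kubectl --context addons-828342 exec busybox -- cat /etc/resolv.conf
	kubectl --context addons-828342 exec busybox -- nslookup registry.kube-system.svc.cluster.local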
	
	
	==> describe nodes <==
	Name:               addons-828342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-828342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-828342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-828342
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-828342"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-828342
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:14:13 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:14:13 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:14:13 +0000   Thu, 20 Nov 2025 21:11:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:14:13 +0000   Thu, 20 Nov 2025 21:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-828342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                08ccc810-1ae7-451c-8f54-003da7828560
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     cloud-spanner-emulator-6f9fcf858b-2p6j9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gadget                      gadget-rkcm9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  gcp-auth                    gcp-auth-78565c9fb4-xchxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7xbwt    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m36s
	  kube-system                 coredns-66bc5c9577-k2xjd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m42s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 csi-hostpathplugin-l4wrc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 etcd-addons-828342                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m48s
	  kube-system                 kindnet-mb5xh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m42s
	  kube-system                 kube-apiserver-addons-828342                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-controller-manager-addons-828342       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-7p2c4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-scheduler-addons-828342                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 metrics-server-85b7d694d7-hwvxs             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m38s
	  kube-system                 nvidia-device-plugin-daemonset-sh7sx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-6b586f9694-5shs6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 registry-creds-764b6fb674-6zgsm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 registry-proxy-k8tlb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-4sk4t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-plxlw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  local-path-storage          local-path-provisioner-648f6765c9-zsvx2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-788wg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m41s                  kube-proxy       
	  Normal   Starting                 2m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m54s (x8 over 2m54s)  kubelet          Node addons-828342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m54s (x8 over 2m54s)  kubelet          Node addons-828342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m54s (x8 over 2m54s)  kubelet          Node addons-828342 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m47s                  kubelet          Node addons-828342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s                  kubelet          Node addons-828342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s                  kubelet          Node addons-828342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m43s                  node-controller  Node addons-828342 event: Registered Node addons-828342 in Controller
	  Normal   NodeReady                2m1s                   kubelet          Node addons-828342 status is now: NodeReady
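
The node detail above is a describe call against the single control-plane node; its Allocated resources block is the quickest way to confirm the 1050m (52%) CPU request load reported for this 2-CPU node. A sketch, assuming the same context name:

	kubectl --context addons-828342 describe node addons-828342
	kubectl --context addons-828342 get node addons-828342 -o wide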
	
	
	==> dmesg <==
	[Nov20 19:42] overlayfs: idmapped layers are currently not supported
	[Nov20 19:43] overlayfs: idmapped layers are currently not supported
	[Nov20 19:44] overlayfs: idmapped layers are currently not supported
	[ +10.941558] overlayfs: idmapped layers are currently not supported
	[Nov20 19:45] overlayfs: idmapped layers are currently not supported
	[ +39.954456] overlayfs: idmapped layers are currently not supported
	[Nov20 19:46] overlayfs: idmapped layers are currently not supported
	[Nov20 19:48] overlayfs: idmapped layers are currently not supported
	[ +15.306261] overlayfs: idmapped layers are currently not supported
	[Nov20 19:49] overlayfs: idmapped layers are currently not supported
	[Nov20 19:50] overlayfs: idmapped layers are currently not supported
	[Nov20 19:51] overlayfs: idmapped layers are currently not supported
	[ +26.087379] overlayfs: idmapped layers are currently not supported
	[Nov20 19:52] overlayfs: idmapped layers are currently not supported
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190] <==
	{"level":"warn","ts":"2025-11-20T21:11:36.825494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.842649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.859863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.879083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.900849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.912102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.936368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.954583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.968160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:36.987986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.007936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.023563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.044458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.063560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.081292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.119916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.181618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.207597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:37.327877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:52.767200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:52.776215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.393794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.399777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.419394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:12:15.439458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44152","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f89684ba3974d676a7ff46109b4785b9ba18555ea18bcbbed271715c2ca6d641] <==
	2025/11/20 21:13:35 GCP Auth Webhook started!
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:13:38 Ready to marshal response ...
	2025/11/20 21:13:38 Ready to write response ...
	2025/11/20 21:14:00 Ready to marshal response ...
	2025/11/20 21:14:00 Ready to write response ...
	2025/11/20 21:14:06 Ready to marshal response ...
	2025/11/20 21:14:06 Ready to write response ...
	2025/11/20 21:14:12 Ready to marshal response ...
	2025/11/20 21:14:12 Ready to write response ...
	2025/11/20 21:14:12 Ready to marshal response ...
	2025/11/20 21:14:12 Ready to write response ...
	2025/11/20 21:14:19 Ready to marshal response ...
	2025/11/20 21:14:19 Ready to write response ...
	
	
	==> kernel <==
	 21:14:28 up  3:56,  0 user,  load average: 1.46, 2.50, 3.22
	Linux addons-828342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c] <==
	I1120 21:12:27.311126       1 main.go:301] handling current node
	I1120 21:12:37.305995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:12:37.306048       1 main.go:301] handling current node
	I1120 21:12:47.307254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:12:47.307360       1 main.go:301] handling current node
	I1120 21:12:57.305191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:12:57.305246       1 main.go:301] handling current node
	I1120 21:13:07.305702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:07.305733       1 main.go:301] handling current node
	I1120 21:13:17.306067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:17.306100       1 main.go:301] handling current node
	I1120 21:13:27.305474       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:27.305510       1 main.go:301] handling current node
	I1120 21:13:37.305423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:37.305607       1 main.go:301] handling current node
	I1120 21:13:47.305352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:47.305633       1 main.go:301] handling current node
	I1120 21:13:57.305760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:13:57.305809       1 main.go:301] handling current node
	I1120 21:14:07.305620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:14:07.305668       1 main.go:301] handling current node
	I1120 21:14:17.306080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:14:17.306203       1 main.go:301] handling current node
	I1120 21:14:27.305710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:14:27.305830       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 21:13:05.505730       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.42.180:443: connect: connection refused" logger="UnhandledError"
	E1120 21:13:05.511250       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.42.180:443: connect: connection refused" logger="UnhandledError"
	W1120 21:13:06.506135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:06.506180       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 21:13:06.506194       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 21:13:06.506241       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:06.506273       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 21:13:06.507383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 21:13:10.525424       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.42.180:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1120 21:13:10.525869       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 21:13:10.525909       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 21:13:10.654477       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 21:13:10.685769       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1120 21:13:47.704333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51228: use of closed network connection
	E1120 21:13:47.944384       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51258: use of closed network connection
	I1120 21:14:16.207999       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
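
The UnhandledError entries above all stem from the v1beta1.metrics.k8s.io APIService being registered before metrics-server is reachable; the 503 and connection-refused errors taper off after 21:13:10, once the aggregated API is added to the ResourceManager. Its registration state and the eventual data path can be checked with (a sketch, same context assumption):

	kubectl --context addons-828342 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-828342 top nodes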
	
	
	==> kube-controller-manager [159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88] <==
	I1120 21:11:45.360744       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:11:45.360876       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:11:45.364119       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:11:45.364178       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:11:45.364386       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:11:45.364527       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:11:45.368464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:45.372662       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:45.372903       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:11:45.390966       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:11:45.391078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:45.404604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:45.404632       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:11:45.404660       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1120 21:11:50.959974       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1120 21:12:15.368900       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 21:12:15.372884       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 21:12:15.373062       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 21:12:15.373122       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 21:12:15.375135       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 21:12:15.474152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:12:15.476380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:12:30.310953       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1120 21:12:45.483849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 21:12:45.492300       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07] <==
	I1120 21:11:47.133676       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:47.224847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:47.325598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:47.325628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:11:47.325690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:47.355260       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:47.355314       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:47.365119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:47.365449       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:47.365466       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:47.368449       1 config.go:200] "Starting service config controller"
	I1120 21:11:47.368463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:47.368480       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:47.368484       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:47.368495       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:47.368499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:47.369165       1 config.go:309] "Starting node config controller"
	I1120 21:11:47.369173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:47.369179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:47.468619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:11:47.468653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:47.468684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
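
The kube-proxy warning about nodePortAddresses being unset refers to the KubeProxyConfiguration it was started with; in a kubeadm-provisioned cluster such as this one that configuration is held in the kube-proxy ConfigMap. A sketch for inspecting it, assuming the standard kubeadm layout:

	kubectl --context addons-828342 -n kube-system get configmap kube-proxy -o yaml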
	
	
	==> kube-scheduler [303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3] <==
	E1120 21:11:38.349432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:38.349603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:11:38.349678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:38.349746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:11:38.349801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:11:38.349845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:38.349891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:38.359440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:38.362916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:11:38.363018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:38.363071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:38.363195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:38.363247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:11:38.363717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:11:39.214600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:39.332551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:11:39.371981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:11:39.488544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:39.488688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:11:39.508255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:39.566600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:39.575111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:11:39.596665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:39.612618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1120 21:11:42.728121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.165772    1277 reconciler_common.go:299] "Volume detached for volume \"pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" (UniqueName: \"kubernetes.io/host-path/f535306f-808c-46c2-b0f0-59c964602b6f-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\") on node \"addons-828342\" DevicePath \"\""
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.934300    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d1f7effd2231f98046ea546351ef173a490a2c126446a1947e57aa04e6e67b"
	Nov 20 21:14:19 addons-828342 kubelet[1277]: E1120 21:14:19.936336    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-828342\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-828342' and this object" podUID="f535306f-808c-46c2-b0f0-59c964602b6f" pod="default/test-local-path"
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.971455    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbbq\" (UniqueName: \"kubernetes.io/projected/cc6b0489-c669-42ad-8792-36d3d7511d4b-kube-api-access-2dbbq\") pod \"helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") " pod="local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e"
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.971524    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/cc6b0489-c669-42ad-8792-36d3d7511d4b-script\") pod \"helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") " pod="local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e"
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.971583    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-data\") pod \"helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") " pod="local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e"
	Nov 20 21:14:19 addons-828342 kubelet[1277]: I1120 21:14:19.971607    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-gcp-creds\") pod \"helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") " pod="local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e"
	Nov 20 21:14:20 addons-828342 kubelet[1277]: E1120 21:14:20.960701    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-828342\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-828342' and this object" podUID="f535306f-808c-46c2-b0f0-59c964602b6f" pod="default/test-local-path"
	Nov 20 21:14:20 addons-828342 kubelet[1277]: I1120 21:14:20.963761    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f535306f-808c-46c2-b0f0-59c964602b6f" path="/var/lib/kubelet/pods/f535306f-808c-46c2-b0f0-59c964602b6f/volumes"
	Nov 20 21:14:20 addons-828342 kubelet[1277]: E1120 21:14:20.964305    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-828342\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-828342' and this object" podUID="f535306f-808c-46c2-b0f0-59c964602b6f" pod="default/test-local-path"
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.086741    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/cc6b0489-c669-42ad-8792-36d3d7511d4b-script\") pod \"cc6b0489-c669-42ad-8792-36d3d7511d4b\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") "
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087413    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-gcp-creds\") pod \"cc6b0489-c669-42ad-8792-36d3d7511d4b\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") "
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087470    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbbq\" (UniqueName: \"kubernetes.io/projected/cc6b0489-c669-42ad-8792-36d3d7511d4b-kube-api-access-2dbbq\") pod \"cc6b0489-c669-42ad-8792-36d3d7511d4b\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") "
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087490    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-data\") pod \"cc6b0489-c669-42ad-8792-36d3d7511d4b\" (UID: \"cc6b0489-c669-42ad-8792-36d3d7511d4b\") "
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087257    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6b0489-c669-42ad-8792-36d3d7511d4b-script" (OuterVolumeSpecName: "script") pod "cc6b0489-c669-42ad-8792-36d3d7511d4b" (UID: "cc6b0489-c669-42ad-8792-36d3d7511d4b"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087626    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-data" (OuterVolumeSpecName: "data") pod "cc6b0489-c669-42ad-8792-36d3d7511d4b" (UID: "cc6b0489-c669-42ad-8792-36d3d7511d4b"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.087666    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "cc6b0489-c669-42ad-8792-36d3d7511d4b" (UID: "cc6b0489-c669-42ad-8792-36d3d7511d4b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.091193    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6b0489-c669-42ad-8792-36d3d7511d4b-kube-api-access-2dbbq" (OuterVolumeSpecName: "kube-api-access-2dbbq") pod "cc6b0489-c669-42ad-8792-36d3d7511d4b" (UID: "cc6b0489-c669-42ad-8792-36d3d7511d4b"). InnerVolumeSpecName "kube-api-access-2dbbq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.188705    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-gcp-creds\") on node \"addons-828342\" DevicePath \"\""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.188747    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dbbq\" (UniqueName: \"kubernetes.io/projected/cc6b0489-c669-42ad-8792-36d3d7511d4b-kube-api-access-2dbbq\") on node \"addons-828342\" DevicePath \"\""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.188762    1277 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/cc6b0489-c669-42ad-8792-36d3d7511d4b-data\") on node \"addons-828342\" DevicePath \"\""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.188771    1277 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/cc6b0489-c669-42ad-8792-36d3d7511d4b-script\") on node \"addons-828342\" DevicePath \"\""
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.950351    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="350bcaf6f39efec3a241d1fd5d44b69074a111ff5bc5bb96b12f72e056f1a2ed"
	Nov 20 21:14:22 addons-828342 kubelet[1277]: E1120 21:14:22.952432    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e\" is forbidden: User \"system:node:addons-828342\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-828342' and this object" podUID="cc6b0489-c669-42ad-8792-36d3d7511d4b" pod="local-path-storage/helper-pod-delete-pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e"
	Nov 20 21:14:22 addons-828342 kubelet[1277]: I1120 21:14:22.956264    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6b0489-c669-42ad-8792-36d3d7511d4b" path="/var/lib/kubelet/pods/cc6b0489-c669-42ad-8792-36d3d7511d4b/volumes"
	
	
	==> storage-provisioner [c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc] <==
	W1120 21:14:03.194523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:05.197886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:05.205628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:07.209008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:07.214540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:09.219153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:09.229482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:11.232587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:11.237904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:13.240877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:13.245971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:15.249263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:15.256566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:17.260333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:17.267900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:19.272472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:19.280502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:21.284112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:21.291367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:23.294242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:23.298794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:25.302451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:25.307273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:27.310184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:27.317483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-828342 -n addons-828342
helpers_test.go:269: (dbg) Run:  kubectl --context addons-828342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x registry-creds-764b6fb674-6zgsm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x registry-creds-764b6fb674-6zgsm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x registry-creds-764b6fb674-6zgsm: exit status 1 (85.088393ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jxltn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n279x" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6zgsm" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-828342 describe pod ingress-nginx-admission-create-jxltn ingress-nginx-admission-patch-n279x registry-creds-764b6fb674-6zgsm: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable headlamp --alsologtostderr -v=1: exit status 11 (272.198711ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:29.604157  845273 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:29.605017  845273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:29.605037  845273 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:29.605043  845273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:29.605337  845273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:29.605669  845273 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:29.606049  845273 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:29.606071  845273 addons.go:607] checking whether the cluster is paused
	I1120 21:14:29.606176  845273 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:29.606194  845273 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:29.606707  845273 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:29.626475  845273 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:29.626537  845273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:29.649791  845273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:29.749765  845273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:29.749861  845273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:29.786273  845273 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:29.786337  845273 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:29.786357  845273 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:29.786394  845273 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:29.786420  845273 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:29.786440  845273 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:29.786460  845273 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:29.786479  845273 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:29.786507  845273 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:29.786533  845273 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:29.786553  845273 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:29.786572  845273 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:29.786591  845273 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:29.786617  845273 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:29.786639  845273 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:29.786660  845273 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:29.786688  845273 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:29.786721  845273 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:29.786741  845273 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:29.786758  845273 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:29.786780  845273 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:29.786798  845273 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:29.786836  845273 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:29.786861  845273 cri.go:89] found id: ""
	I1120 21:14:29.786945  845273 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:29.803236  845273 out.go:203] 
	W1120 21:14:29.806120  845273 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:29.806147  845273 out.go:285] * 
	* 
	W1120 21:14:29.816170  845273 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:29.819282  845273 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.27s)

TestAddons/parallel/CloudSpanner (6.31s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-2p6j9" [06687541-7454-4640-85e6-22fbc4c3790c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003291049s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (300.583198ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:26.307633  844748 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:26.308383  844748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.308420  844748 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:26.308443  844748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.308769  844748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:26.309148  844748 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:26.309558  844748 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:26.309594  844748 addons.go:607] checking whether the cluster is paused
	I1120 21:14:26.309735  844748 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:26.309766  844748 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:26.310279  844748 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:26.327292  844748 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:26.327370  844748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:26.379174  844748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:26.489811  844748 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:26.489901  844748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:26.520721  844748 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:26.520753  844748 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:26.520759  844748 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:26.520763  844748 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:26.520767  844748 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:26.520770  844748 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:26.520779  844748 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:26.520782  844748 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:26.520785  844748 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:26.520797  844748 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:26.520802  844748 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:26.520810  844748 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:26.520814  844748 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:26.520817  844748 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:26.520820  844748 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:26.520829  844748 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:26.520836  844748 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:26.520841  844748 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:26.520845  844748 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:26.520848  844748 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:26.520853  844748 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:26.520856  844748 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:26.520860  844748 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:26.520863  844748 cri.go:89] found id: ""
	I1120 21:14:26.520922  844748 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:26.536085  844748 out.go:203] 
	W1120 21:14:26.539065  844748 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:26.539088  844748 out.go:285] * 
	* 
	W1120 21:14:26.547193  844748 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:26.550232  844748 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.31s)

TestAddons/parallel/LocalPath (8.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-828342 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-828342 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f535306f-808c-46c2-b0f0-59c964602b6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f535306f-808c-46c2-b0f0-59c964602b6f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f535306f-808c-46c2-b0f0-59c964602b6f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003769057s
addons_test.go:967: (dbg) Run:  kubectl --context addons-828342 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 ssh "cat /opt/local-path-provisioner/pvc-dbe0946f-6117-40e5-acb9-72d499c7f31e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-828342 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-828342 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (316.050095ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:19.986727  844612 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:19.987587  844612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:19.987623  844612 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:19.987642  844612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:19.987941  844612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:19.988253  844612 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:19.988665  844612 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:19.988701  844612 addons.go:607] checking whether the cluster is paused
	I1120 21:14:19.988850  844612 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:19.988884  844612 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:19.989367  844612 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:20.015844  844612 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:20.015908  844612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:20.036491  844612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:20.155589  844612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:20.155777  844612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:20.200477  844612 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:20.200502  844612 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:20.200508  844612 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:20.200516  844612 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:20.200520  844612 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:20.200523  844612 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:20.200526  844612 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:20.200529  844612 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:20.200532  844612 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:20.200538  844612 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:20.200547  844612 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:20.200551  844612 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:20.200554  844612 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:20.200557  844612 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:20.200560  844612 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:20.200565  844612 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:20.200568  844612 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:20.200572  844612 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:20.200575  844612 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:20.200578  844612 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:20.200582  844612 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:20.200590  844612 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:20.200593  844612 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:20.200596  844612 cri.go:89] found id: ""
	I1120 21:14:20.200651  844612 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:20.220553  844612 out.go:203] 
	W1120 21:14:20.224002  844612 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:20.224050  844612 out.go:285] * 
	* 
	W1120 21:14:20.235980  844612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:20.239344  844612 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.52s)

TestAddons/parallel/NvidiaDevicePlugin (6.29s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-sh7sx" [6e6f4bdc-8538-4b2f-b02f-7e60b9a70b90] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004273379s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (282.568733ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 21:14:11.496345  844239 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:11.497391  844239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:11.497406  844239 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:11.497411  844239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:11.497779  844239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:14:11.498145  844239 mustload.go:66] Loading cluster: addons-828342
	I1120 21:14:11.498578  844239 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:11.498594  844239 addons.go:607] checking whether the cluster is paused
	I1120 21:14:11.498726  844239 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:11.498741  844239 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:14:11.499325  844239 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:14:11.522298  844239 ssh_runner.go:195] Run: systemctl --version
	I1120 21:14:11.522357  844239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:14:11.549928  844239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:14:11.657515  844239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:14:11.657618  844239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:14:11.686299  844239 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:14:11.686324  844239 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:14:11.686330  844239 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:14:11.686335  844239 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:14:11.686338  844239 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:14:11.686342  844239 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:14:11.686345  844239 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:14:11.686349  844239 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:14:11.686352  844239 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:14:11.686358  844239 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:14:11.686362  844239 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:14:11.686366  844239 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:14:11.686370  844239 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:14:11.686373  844239 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:14:11.686376  844239 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:14:11.686381  844239 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:14:11.686388  844239 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:14:11.686391  844239 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:14:11.686395  844239 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:14:11.686397  844239 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:14:11.686402  844239 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:14:11.686405  844239 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:14:11.686408  844239 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:14:11.686411  844239 cri.go:89] found id: ""
	I1120 21:14:11.686460  844239 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:14:11.701689  844239 out.go:203] 
	W1120 21:14:11.705009  844239 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:14:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:14:11.705037  844239 out.go:285] * 
	* 
	W1120 21:14:11.712841  844239 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:14:11.716026  844239 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)
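Note on the failure above: the disable command never reaches the addon. The log shows the kube-system containers are found via crictl, but the subsequent paused-state probe (`sudo runc list -f json`) exits 1 because /run/runc does not exist on this CRI-O node, so minikube aborts with MK_ADDON_DISABLE_PAUSED. The node-side probe can be re-run by hand; below is a minimal stdlib Go sketch (hypothetical, not part of the test suite) that shells out through `minikube ssh`, with the profile name taken from this report and the minikube binary assumed to be on PATH:

	// repro_paused_check.go: re-run the node-side command behind the failing
	// "check paused" step. Hypothetical helper; profile name comes from this report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "addons-828342" // adjust if reproducing against a different profile
		// Same command the addon-disable path runs before deciding whether the
		// cluster is paused: list runc-managed containers as JSON.
		cmd := exec.Command("minikube", "-p", profile, "ssh", "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				fmt.Println("runc state dir is missing on the node (matches this report):")
			}
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc list succeeded:\n%s", out)
	}

Since crictl had just listed running kube-system containers, the cluster is not actually paused; the check fails on the missing runc state directory, and the same error repeats in the Yakd failure below.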

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-788wg" [27b81838-5954-4f6c-a60a-d150e3f551ab] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008930568s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-828342 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-828342 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.23966ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:13:54.404755  843757 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:13:54.405604  843757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:54.405620  843757 out.go:374] Setting ErrFile to fd 2...
	I1120 21:13:54.405626  843757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:54.405915  843757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:13:54.406234  843757 mustload.go:66] Loading cluster: addons-828342
	I1120 21:13:54.406600  843757 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:13:54.406621  843757 addons.go:607] checking whether the cluster is paused
	I1120 21:13:54.406722  843757 config.go:182] Loaded profile config "addons-828342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:13:54.406736  843757 host.go:66] Checking if "addons-828342" exists ...
	I1120 21:13:54.407241  843757 cli_runner.go:164] Run: docker container inspect addons-828342 --format={{.State.Status}}
	I1120 21:13:54.425450  843757 ssh_runner.go:195] Run: systemctl --version
	I1120 21:13:54.425517  843757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-828342
	I1120 21:13:54.444478  843757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/addons-828342/id_rsa Username:docker}
	I1120 21:13:54.545655  843757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:13:54.545738  843757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:13:54.575756  843757 cri.go:89] found id: "048a91057c75bba31eaa3a03d02ebf8e814a46c4e61e9574164e3b069761c624"
	I1120 21:13:54.575781  843757 cri.go:89] found id: "e1b29a88eeca45788dacbc87a54d70c69780cc8460eb32dfc38d30ed3008aef4"
	I1120 21:13:54.575787  843757 cri.go:89] found id: "4cf3d3324d8e70cb18e3ea1c22a48046b8b0d2026d3060636aba6d38fef0d425"
	I1120 21:13:54.575790  843757 cri.go:89] found id: "95aebe3ee50423f80aa05813261fadff3a476cf06f52c06f19dc8f2da546f870"
	I1120 21:13:54.575793  843757 cri.go:89] found id: "e0b907ada27443d97ab33f67571078b8d88d1824cfcf30d00712eea65cc2c813"
	I1120 21:13:54.575798  843757 cri.go:89] found id: "d877d3a1d3b44f379e3bac07a3cfb11100205a21710f514f3a7b41e330ac0b31"
	I1120 21:13:54.575801  843757 cri.go:89] found id: "30158179e15c3fef38a8687ab6068d300e14369ff97fd882332168e4f43516b4"
	I1120 21:13:54.575807  843757 cri.go:89] found id: "a93f40eb30f48bb0283a551d6307cd08f6d0a40215c5b81463266fc30815e552"
	I1120 21:13:54.575810  843757 cri.go:89] found id: "c5c88ac4e46dba80fb539945151f2312fe050c2f4847eea4e2ce829a444b9ee5"
	I1120 21:13:54.575824  843757 cri.go:89] found id: "f5429fe8d6eae02dce81dafe591ad1f6c4e0459fd4e3d18ab166104c925a389c"
	I1120 21:13:54.575827  843757 cri.go:89] found id: "12065726cc6906f8d604a2c9389ff76e404c3b9043d736e078220985a6f19544"
	I1120 21:13:54.575831  843757 cri.go:89] found id: "1c684f5b792d7d1a3eb2ae1dfc86b66d147703c6a4857eb0c30bfca91b8d3ade"
	I1120 21:13:54.575834  843757 cri.go:89] found id: "284630d028c28dd6f47d624e7c3dbfe6c5f2dc13a50513e9903f2fac21d0870e"
	I1120 21:13:54.575838  843757 cri.go:89] found id: "58a00a031d21a06f230e1f62d991c8a71390415366c18c8f6f251033d021eff4"
	I1120 21:13:54.575841  843757 cri.go:89] found id: "a5870aba6804fb54924ca6b726dacb571a0edfe54cba8a2bd9324945a5404c0d"
	I1120 21:13:54.575847  843757 cri.go:89] found id: "4dfccd2918ac5c46446ac1a16d60f0f32fb4b52429d704bb1d596c507a46e396"
	I1120 21:13:54.575850  843757 cri.go:89] found id: "c82f61a3038fcd2cd0e4d72e415bb87b397a54b5597a62dbcd1a4e64254002bc"
	I1120 21:13:54.575854  843757 cri.go:89] found id: "20980cdb4eaaa10249e37e485f9e2e25e20ed42bbae58652543a346e9ae08b07"
	I1120 21:13:54.575857  843757 cri.go:89] found id: "6896f41cbd9c30f84c869201e16f2ee171f3098ed474e78ebdab103ed93ae13c"
	I1120 21:13:54.575860  843757 cri.go:89] found id: "159ee609cc9eb0b2922863bc869fdd85805fcd7c2a4a07614ec049e8431b9c88"
	I1120 21:13:54.575865  843757 cri.go:89] found id: "5e20cd420abae8e4c1eafc75a9912acb986186345fd76871a250dc8b7258afaa"
	I1120 21:13:54.575868  843757 cri.go:89] found id: "1f333dfa546bf4abbb0c8289a2b560931f75777f53c11aba4825a4bdbe6aa190"
	I1120 21:13:54.575871  843757 cri.go:89] found id: "303e566caaff96da7c7e61c9632c9928327c3b7d4a267559b1735ea6c8bfd5a3"
	I1120 21:13:54.575874  843757 cri.go:89] found id: ""
	I1120 21:13:54.575927  843757 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:13:54.589907  843757 out.go:203] 
	W1120 21:13:54.591200  843757 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:13:54.591223  843757 out.go:285] * 
	* 
	W1120 21:13:54.599483  843757 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:13:54.600784  843757 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-828342 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-038709 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-038709 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4sgsr" [4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1120 21:23:38.577772  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:24:06.285096  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:28:38.577793  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-038709 -n functional-038709
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-20 21:31:54.09380289 +0000 UTC m=+1343.436657653
functional_test.go:1645: (dbg) Run:  kubectl --context functional-038709 describe po hello-node-connect-7d85dfc575-4sgsr -n default
functional_test.go:1645: (dbg) kubectl --context functional-038709 describe po hello-node-connect-7d85dfc575-4sgsr -n default:
Name:             hello-node-connect-7d85dfc575-4sgsr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-038709/192.168.49.2
Start Time:       Thu, 20 Nov 2025 21:21:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wvh7p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wvh7p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4sgsr to functional-038709
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-038709 logs hello-node-connect-7d85dfc575-4sgsr -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-038709 logs hello-node-connect-7d85dfc575-4sgsr -n default: exit status 1 (90.984415ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4sgsr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-038709 logs hello-node-connect-7d85dfc575-4sgsr -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-038709 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-4sgsr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-038709/192.168.49.2
Start Time:       Thu, 20 Nov 2025 21:21:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wvh7p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wvh7p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4sgsr to functional-038709
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-038709 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-038709 logs -l app=hello-node-connect: exit status 1 (92.183043ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4sgsr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-038709 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-038709 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.170.5
IPs:                      10.99.170.5
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31416/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
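All of the pull failures in this test share one cause, visible in the kubelet events: the deployment uses the short image name kicbase/echo-server, and the node's container runtime enforces short-name resolution, so the ambiguous reference is rejected instead of being defaulted to a registry. A minimal stdlib Go sketch (a hypothetical helper, not part of the test suite) that flags such unqualified references before a test deploys them:

	// check_image_ref.go: flag image references that lack an explicit registry host,
	// which an enforcing short-name mode will reject. Hypothetical helper, stdlib only.
	package main

	import (
		"fmt"
		"strings"
	)

	// isFullyQualified reports whether the first path component of the reference
	// looks like a registry host (contains '.' or ':', or is "localhost").
	func isFullyQualified(ref string) bool {
		first := strings.SplitN(ref, "/", 2)[0]
		return strings.ContainsAny(first, ".:") || first == "localhost"
	}

	func main() {
		for _, ref := range []string{
			"kicbase/echo-server",           // unqualified: rejected under enforcing mode
			"docker.io/kicbase/echo-server", // qualified
			"gcr.io/k8s-minikube/busybox",   // qualified
		} {
			fmt.Printf("%-35s fully qualified: %v\n", ref, isFullyQualified(ref))
		}
	}

Fully qualifying the image (e.g. docker.io/kicbase/echo-server) or defining a short-name alias in the node's registries.conf would typically avoid the ambiguous-list rejection seen here.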
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-038709
helpers_test.go:243: (dbg) docker inspect functional-038709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44",
	        "Created": "2025-11-20T21:18:24.447612539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 852594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:18:24.518967099Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44/hosts",
	        "LogPath": "/var/lib/docker/containers/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44-json.log",
	        "Name": "/functional-038709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-038709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-038709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44",
	                "LowerDir": "/var/lib/docker/overlay2/fdfc8d26d3951b927b13f87e102f168c50c137c214f9e837f810625fb350a1f2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdfc8d26d3951b927b13f87e102f168c50c137c214f9e837f810625fb350a1f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdfc8d26d3951b927b13f87e102f168c50c137c214f9e837f810625fb350a1f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdfc8d26d3951b927b13f87e102f168c50c137c214f9e837f810625fb350a1f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-038709",
	                "Source": "/var/lib/docker/volumes/functional-038709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-038709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-038709",
	                "name.minikube.sigs.k8s.io": "functional-038709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7509de86068c77c983a75136562ff704a8c121acc05a8ac969de3e58281d01b9",
	            "SandboxKey": "/var/run/docker/netns/7509de86068c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33889"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33890"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-038709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:7c:d2:ac:db:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "29979a4e07c02fb41bafddc8ab077a115f6915c50ca1515f3359d6a94356091e",
	                    "EndpointID": "f5b30388950927e60d8bc4667dbecf87b73466a80f939bb8c6ac1dcc61e434b9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-038709",
	                        "49c79e190e0e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-038709 -n functional-038709
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 logs -n 25: (1.55936543s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-038709 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ ssh            │ functional-038709 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ ssh            │ functional-038709 ssh -- ls -la /mount-9p                                                                          │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ ssh            │ functional-038709 ssh sudo umount -f /mount-9p                                                                     │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ mount          │ -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount2 --alsologtostderr -v=1 │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ mount          │ -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount1 --alsologtostderr -v=1 │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ mount          │ -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount3 --alsologtostderr -v=1 │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ ssh            │ functional-038709 ssh findmnt -T /mount1                                                                           │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ ssh            │ functional-038709 ssh findmnt -T /mount2                                                                           │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ ssh            │ functional-038709 ssh findmnt -T /mount3                                                                           │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ mount          │ -p functional-038709 --kill=true                                                                                   │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ start          │ -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ start          │ -p functional-038709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ start          │ -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-038709 --alsologtostderr -v=1                                                     │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ update-context │ functional-038709 update-context --alsologtostderr -v=2                                                            │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ update-context │ functional-038709 update-context --alsologtostderr -v=2                                                            │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ update-context │ functional-038709 update-context --alsologtostderr -v=2                                                            │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ image          │ functional-038709 image ls --format short --alsologtostderr                                                        │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ image          │ functional-038709 image ls --format yaml --alsologtostderr                                                         │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ ssh            │ functional-038709 ssh pgrep buildkitd                                                                              │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ image          │ functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr             │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ image          │ functional-038709 image ls                                                                                         │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ image          │ functional-038709 image ls --format json --alsologtostderr                                                         │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	│ image          │ functional-038709 image ls --format table --alsologtostderr                                                        │ functional-038709 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:31 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:31:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:31:35.460214  864472 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:31:35.460394  864472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:35.460405  864472 out.go:374] Setting ErrFile to fd 2...
	I1120 21:31:35.460411  864472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:35.460775  864472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:31:35.461158  864472 out.go:368] Setting JSON to false
	I1120 21:31:35.462037  864472 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15221,"bootTime":1763659075,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:31:35.462107  864472 start.go:143] virtualization:  
	I1120 21:31:35.465133  864472 out.go:179] * [functional-038709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:31:35.468077  864472 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:31:35.468208  864472 notify.go:221] Checking for updates...
	I1120 21:31:35.474289  864472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:31:35.477098  864472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:31:35.479894  864472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:31:35.482635  864472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:31:35.485479  864472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:31:35.488820  864472 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:31:35.489421  864472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:31:35.519075  864472 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:31:35.519184  864472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:31:35.585489  864472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:31:35.575503821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:31:35.585621  864472 docker.go:319] overlay module found
	I1120 21:31:35.590691  864472 out.go:179] * Using the docker driver based on the existing profile
	I1120 21:31:35.593495  864472 start.go:309] selected driver: docker
	I1120 21:31:35.593516  864472 start.go:930] validating driver "docker" against &{Name:functional-038709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-038709 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:31:35.593673  864472 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:31:35.597001  864472 out.go:203] 
	W1120 21:31:35.599794  864472 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1120 21:31:35.602553  864472 out.go:203] 
	
	
	==> CRI-O <==
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.668942205Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=1e0a74a8-8c45-457d-920c-97dcb3008555 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.669527Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5445fc7e-89ec-427b-9b42-3abc37c00615 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.671377501Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4a6b3844-9aa5-43fb-ae67-9d2901bb2ccd name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.67146487Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=b991bd63-886f-4261-8d43-2fe2e4e1bee6 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.677198874Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.679059853Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x/kubernetes-dashboard" id=88943e50-02c5-4dd4-8a8f-5f869494a8bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.679787698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.687391461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.687848558Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1a514d940ff795f568d8e4daca24c2995cb51d7d9141994f91c9c0070565baf4/merged/etc/group: no such file or directory"
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.688523873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.708873477Z" level=info msg="Created container f08d85e29558b17b55dbca31df10c88c6b9482cba6cf5be5b093ccb3f743edf9: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x/kubernetes-dashboard" id=88943e50-02c5-4dd4-8a8f-5f869494a8bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.709943751Z" level=info msg="Starting container: f08d85e29558b17b55dbca31df10c88c6b9482cba6cf5be5b093ccb3f743edf9" id=38e30632-193b-4327-830f-98005124994c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.713315157Z" level=info msg="Started container" PID=7009 containerID=f08d85e29558b17b55dbca31df10c88c6b9482cba6cf5be5b093ccb3f743edf9 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x/kubernetes-dashboard id=38e30632-193b-4327-830f-98005124994c name=/runtime.v1.RuntimeService/StartContainer sandboxID=77b5886833c38f6af20002f543ded932a49f072df9d440549a9559c289c32aa8
	Nov 20 21:31:41 functional-038709 crio[3766]: time="2025-11-20T21:31:41.943810207Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.120781037Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=b991bd63-886f-4261-8d43-2fe2e4e1bee6 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.121478309Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=f8341d35-611a-4e49-a40b-e442bf55ac83 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.125008782Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=0a873feb-f951-47af-88b3-a8f0cfb61664 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.131654149Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kmbr8/dashboard-metrics-scraper" id=ce9f1450-08e8-4f16-bc5b-9ebbf1249237 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.131811878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.137108584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.137325621Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3c341a2a1a5b9d70e65abe2d8756cc645784d423c55626c3dca914b024b7f627/merged/etc/group: no such file or directory"
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.137654175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.152520141Z" level=info msg="Created container bde09e978763d6ea567afbeb04ea785d24a97de1e7c68077ad55a558105d768d: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kmbr8/dashboard-metrics-scraper" id=ce9f1450-08e8-4f16-bc5b-9ebbf1249237 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.15536071Z" level=info msg="Starting container: bde09e978763d6ea567afbeb04ea785d24a97de1e7c68077ad55a558105d768d" id=5e61d2ee-ac32-48d8-b1e9-82083f84f6e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:31:43 functional-038709 crio[3766]: time="2025-11-20T21:31:43.157796777Z" level=info msg="Started container" PID=7051 containerID=bde09e978763d6ea567afbeb04ea785d24a97de1e7c68077ad55a558105d768d description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kmbr8/dashboard-metrics-scraper id=5e61d2ee-ac32-48d8-b1e9-82083f84f6e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cae6917cefcf2d20113092d017ec82b508efd4415fd53f1a4ed650ac882c9de
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	bde09e978763d       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   12 seconds ago      Running             dashboard-metrics-scraper   0                   7cae6917cefcf       dashboard-metrics-scraper-77bf4d6c4c-kmbr8   kubernetes-dashboard
	f08d85e29558b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         13 seconds ago      Running             kubernetes-dashboard        0                   77b5886833c38       kubernetes-dashboard-855c9754f9-2cm7x        kubernetes-dashboard
	89da2a97eee3d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              27 seconds ago      Exited              mount-munger                0                   563d030410e54       busybox-mount                                default
	32674543f8f8b       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712                  10 minutes ago      Running             myfrontend                  0                   d74655f2e4c17       sp-pod                                       default
	4da15bb9ef481       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                  10 minutes ago      Running             nginx                       0                   063e9aa9d1a82       nginx-svc                                    default
	8cffacc3405ae       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 3                   8066418bd9a33       kindnet-zqlkd                                kube-system
	2ed869e3e0a62       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     3                   425a83101b3bb       coredns-66bc5c9577-xq5br                     kube-system
	b0f4052cb88f7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   5d02690ed79f6       storage-provisioner                          kube-system
	4b670f9e86226       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  3                   da7dd98c72615       kube-proxy-q5zbs                             kube-system
	50f7c1e1cc1b0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   6a869f4b33c2f       kube-apiserver-functional-038709             kube-system
	41e99ecf22a17       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     3                   7a6f1ff3a68d4       kube-controller-manager-functional-038709    kube-system
	159da8d7a665e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        3                   639c79d750f17       etcd-functional-038709                       kube-system
	b1cf8c42df67d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              3                   8539e925c1eb5       kube-scheduler-functional-038709             kube-system
	e20bdfc049305       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     2                   425a83101b3bb       coredns-66bc5c9577-xq5br                     kube-system
	2e22f7a22dedc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     2                   7a6f1ff3a68d4       kube-controller-manager-functional-038709    kube-system
	250272ea23dc0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         2                   5d02690ed79f6       storage-provisioner                          kube-system
	93bdcc39a190a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  2                   da7dd98c72615       kube-proxy-q5zbs                             kube-system
	8a65c8344a3d5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        2                   639c79d750f17       etcd-functional-038709                       kube-system
	29bf421f4b1a3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 2                   8066418bd9a33       kindnet-zqlkd                                kube-system
	d2ba236aca0b0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              2                   8539e925c1eb5       kube-scheduler-functional-038709             kube-system
	
	
	==> coredns [2ed869e3e0a627afd76960d238db713e5a22df9ed405a26d83f347394da7f8dc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54424 - 22675 "HINFO IN 8488656318185690121.3815881775050075066. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033956473s
	
	
	==> coredns [e20bdfc049305dd014b15e00a0b89b34b564b8c5200be807356357106cd9a918] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53800 - 46439 "HINFO IN 6208268866169585827.8159923818263880145. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040914702s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-038709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-038709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-038709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_18_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-038709
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:31:52 +0000   Thu, 20 Nov 2025 21:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:31:52 +0000   Thu, 20 Nov 2025 21:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:31:52 +0000   Thu, 20 Nov 2025 21:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:31:52 +0000   Thu, 20 Nov 2025 21:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-038709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                612095a1-ebe7-4202-a5ae-94c333cd65e9
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-stx8w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-4sgsr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-xq5br                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-functional-038709                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-zqlkd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-038709              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-038709     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q5zbs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-038709              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-kmbr8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2cm7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-038709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-038709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-038709 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-038709 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-038709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-038709 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                node-controller  Node functional-038709 event: Registered Node functional-038709 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-038709 status is now: NodeReady
	  Warning  ContainerGCFailed        12m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-038709 event: Registered Node functional-038709 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-038709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-038709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-038709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-038709 event: Registered Node functional-038709 in Controller
	
	
	==> dmesg <==
	[Nov20 19:44] overlayfs: idmapped layers are currently not supported
	[ +10.941558] overlayfs: idmapped layers are currently not supported
	[Nov20 19:45] overlayfs: idmapped layers are currently not supported
	[ +39.954456] overlayfs: idmapped layers are currently not supported
	[Nov20 19:46] overlayfs: idmapped layers are currently not supported
	[Nov20 19:48] overlayfs: idmapped layers are currently not supported
	[ +15.306261] overlayfs: idmapped layers are currently not supported
	[Nov20 19:49] overlayfs: idmapped layers are currently not supported
	[Nov20 19:50] overlayfs: idmapped layers are currently not supported
	[Nov20 19:51] overlayfs: idmapped layers are currently not supported
	[ +26.087379] overlayfs: idmapped layers are currently not supported
	[Nov20 19:52] overlayfs: idmapped layers are currently not supported
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [159da8d7a665e68f7502051da21b7ff4575ba435fcbb079f636e03d228877997] <==
	{"level":"warn","ts":"2025-11-20T21:20:47.060391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.080322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.095878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.119390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.129761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.146412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.171360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.187377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.203837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.227538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.240505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.267676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.287820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.304285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.323350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.339479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.351753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.373788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.399459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.413115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.435624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:47.519486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38444","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:30:45.944494Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1152}
	{"level":"info","ts":"2025-11-20T21:30:45.968508Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1152,"took":"23.715554ms","hash":116133962,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1511424,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-20T21:30:45.968573Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":116133962,"revision":1152,"compact-revision":-1}
	
	
	==> etcd [8a65c8344a3d56b0e2be2084c611a066c31f36ad811885ba6fbe613fbd22b1fc] <==
	{"level":"warn","ts":"2025-11-20T21:20:01.434461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.439906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.463219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.502652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.518447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.536551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:20:01.597024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:20:27.504942Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-20T21:20:27.504991Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-038709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-20T21:20:27.505124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T21:20:27.656080Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T21:20:27.656172Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T21:20:27.656200Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-20T21:20:27.656281Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-20T21:20:27.656339Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-20T21:20:27.656355Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T21:20:27.656394Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T21:20:27.656404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-20T21:20:27.656447Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T21:20:27.656462Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T21:20:27.656471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T21:20:27.660101Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-20T21:20:27.660200Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T21:20:27.660242Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-20T21:20:27.660249Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-038709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 21:31:56 up  4:14,  0 user,  load average: 0.60, 0.43, 1.37
	Linux functional-038709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29bf421f4b1a3ed25a5b2f485377470d79d552dcfb34aba6be3a33918b5d6068] <==
	I1120 21:19:59.029553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:19:59.029770       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1120 21:19:59.029908       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:19:59.029920       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:19:59.029930       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:19:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:19:59.309029       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:19:59.309109       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:19:59.309142       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:19:59.309516       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:20:02.911093       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:20:02.911157       1 metrics.go:72] Registering metrics
	I1120 21:20:02.911223       1 controller.go:711] "Syncing nftables rules"
	I1120 21:20:09.308149       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:20:09.308233       1 main.go:301] handling current node
	I1120 21:20:19.307660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:20:19.307694       1 main.go:301] handling current node
	
	
	==> kindnet [8cffacc3405ae972f1050cfc61ada8f00785e6f4368896dca05e91740bcfac74] <==
	I1120 21:29:49.230069       1 main.go:301] handling current node
	I1120 21:29:59.231165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:29:59.231217       1 main.go:301] handling current node
	I1120 21:30:09.231068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:09.231103       1 main.go:301] handling current node
	I1120 21:30:19.239076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:19.239112       1 main.go:301] handling current node
	I1120 21:30:29.231534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:29.231569       1 main.go:301] handling current node
	I1120 21:30:39.229962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:39.229997       1 main.go:301] handling current node
	I1120 21:30:49.229719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:49.229760       1 main.go:301] handling current node
	I1120 21:30:59.229838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:30:59.229953       1 main.go:301] handling current node
	I1120 21:31:09.230845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:31:09.230935       1 main.go:301] handling current node
	I1120 21:31:19.235355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:31:19.235390       1 main.go:301] handling current node
	I1120 21:31:29.230765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:31:29.230812       1 main.go:301] handling current node
	I1120 21:31:39.230731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:31:39.230771       1 main.go:301] handling current node
	I1120 21:31:49.231065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:31:49.231110       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50f7c1e1cc1b092a07e8d54904f51d67cd28c191444f528beeb1192d6aff4455] <==
	I1120 21:20:48.307870       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:20:48.325089       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:20:48.331295       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 21:20:48.355087       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:20:48.672310       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:20:49.080339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:20:50.147090       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:20:50.299273       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:20:50.402283       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:20:50.417401       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:20:51.883978       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:20:51.932089       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:20:51.981320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:21:06.295402       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.24.78"}
	E1120 21:21:10.203747       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1120 21:21:15.809646       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.27.226"}
	I1120 21:21:19.559911       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.56.153"}
	E1120 21:21:44.443459       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60722: use of closed network connection
	E1120 21:21:45.038315       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1120 21:21:53.398694       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:46350: use of closed network connection
	I1120 21:21:53.729233       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.170.5"}
	I1120 21:30:48.263007       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:31:36.631967       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:31:36.924786       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.27.224"}
	I1120 21:31:36.950142       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.43.196"}
	
	
	==> kube-controller-manager [2e22f7a22dedce70b07ac0db6e17bf54de78f0d231883fcffe237721cf9a1c4a] <==
	I1120 21:20:05.829008       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:20:05.829129       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 21:20:05.830308       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:20:05.833497       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:20:05.833568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:20:05.835865       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:20:05.836986       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:20:05.841274       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:20:05.845150       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:20:05.845147       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:20:05.846195       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:20:05.846206       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:20:05.848586       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:20:05.848657       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:20:05.848764       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-038709"
	I1120 21:20:05.848816       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:20:05.853159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:20:05.854099       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 21:20:05.856377       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:20:05.857568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:20:05.859782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:20:05.860958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:20:05.862088       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:20:05.868317       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:20:05.873585       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-controller-manager [41e99ecf22a1729c66d11c8cb9aff7191f2fbab974a41d9126d8f5404dc1b3f4] <==
	I1120 21:20:51.642240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:20:51.645948       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:20:51.656698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:20:51.656722       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:20:51.656826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:20:51.656836       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:20:51.661905       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:20:51.667086       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:20:51.673402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:20:51.675052       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:20:51.676445       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 21:20:51.676447       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:20:51.676485       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:20:51.684713       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:20:51.693168       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:20:51.693257       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:20:51.697337       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E1120 21:31:36.732655       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.753488       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.761080       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.768441       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.774474       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.780991       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.792335       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 21:31:36.801115       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4b670f9e86226c66c90c6564864310841c4782b1cbf5ab7a8f9967fb2e0f85b4] <==
	I1120 21:20:49.074554       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:20:49.194243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:20:49.295573       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:20:49.295797       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:20:49.295931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:20:49.354152       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:20:49.354209       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:20:49.361266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:20:49.361572       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:20:49.361601       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:20:49.363799       1 config.go:200] "Starting service config controller"
	I1120 21:20:49.363828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:20:49.363847       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:20:49.363851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:20:49.363869       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:20:49.363874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:20:49.364515       1 config.go:309] "Starting node config controller"
	I1120 21:20:49.364535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:20:49.364542       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:20:49.464780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:20:49.464822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:20:49.464850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [93bdcc39a190a476bfe94ea6285aec57a66352a1b0ab5b9fc9e78d721807a078] <==
	I1120 21:20:03.195117       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:20:03.820387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:20:03.923185       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:20:03.923307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:20:03.927039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:20:04.005399       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:20:04.005555       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:20:04.058036       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:20:04.058430       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:20:04.058695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:20:04.060129       1 config.go:200] "Starting service config controller"
	I1120 21:20:04.060207       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:20:04.060253       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:20:04.060306       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:20:04.060346       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:20:04.060391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:20:04.064138       1 config.go:309] "Starting node config controller"
	I1120 21:20:04.064216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:20:04.064233       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:20:04.161505       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:20:04.163202       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:20:04.163271       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b1cf8c42df67d6096d6ff55ceadb60eeb57f6012151213dd67f6819cb046e521] <==
	I1120 21:20:47.157781       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:20:48.158331       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:20:48.158426       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:20:48.158462       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:20:48.158490       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:20:48.353204       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:20:48.353235       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:20:48.355426       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:48.355511       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:48.355854       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:20:48.355929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:20:48.456320       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d2ba236aca0b018b1ccb8991f2c2eb0b274cbc23239cd0e8ea6ae6353d17ba03] <==
	I1120 21:20:01.980199       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:20:05.067696       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:20:05.067803       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:20:05.073009       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:20:05.073324       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:20:05.073389       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:20:05.073440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:20:05.076038       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:05.076105       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:05.076163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:20:05.076193       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:20:05.174034       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 21:20:05.176365       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:05.176362       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:20:27.495847       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1120 21:20:27.499625       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1120 21:20:27.499655       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1120 21:20:27.499681       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1120 21:20:27.499715       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:20:27.499737       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:20:27.499806       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1120 21:20:27.499836       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 20 21:30:53 functional-038709 kubelet[4087]: E1120 21:30:53.678434    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-stx8w" podUID="3e410898-b9da-4942-9160-bfb873b23068"
	Nov 20 21:30:54 functional-038709 kubelet[4087]: E1120 21:30:54.677901    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4sgsr" podUID="4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96"
	Nov 20 21:31:05 functional-038709 kubelet[4087]: E1120 21:31:05.677830    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4sgsr" podUID="4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96"
	Nov 20 21:31:08 functional-038709 kubelet[4087]: E1120 21:31:08.678047    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-stx8w" podUID="3e410898-b9da-4942-9160-bfb873b23068"
	Nov 20 21:31:20 functional-038709 kubelet[4087]: E1120 21:31:20.678735    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4sgsr" podUID="4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96"
	Nov 20 21:31:22 functional-038709 kubelet[4087]: E1120 21:31:22.678041    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-stx8w" podUID="3e410898-b9da-4942-9160-bfb873b23068"
	Nov 20 21:31:25 functional-038709 kubelet[4087]: I1120 21:31:25.541830    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/14422099-3edc-4a34-81b4-799a4c5ce2c4-test-volume\") pod \"busybox-mount\" (UID: \"14422099-3edc-4a34-81b4-799a4c5ce2c4\") " pod="default/busybox-mount"
	Nov 20 21:31:25 functional-038709 kubelet[4087]: I1120 21:31:25.541888    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdx7\" (UniqueName: \"kubernetes.io/projected/14422099-3edc-4a34-81b4-799a4c5ce2c4-kube-api-access-7zdx7\") pod \"busybox-mount\" (UID: \"14422099-3edc-4a34-81b4-799a4c5ce2c4\") " pod="default/busybox-mount"
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.671335    4087 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdx7\" (UniqueName: \"kubernetes.io/projected/14422099-3edc-4a34-81b4-799a4c5ce2c4-kube-api-access-7zdx7\") pod \"14422099-3edc-4a34-81b4-799a4c5ce2c4\" (UID: \"14422099-3edc-4a34-81b4-799a4c5ce2c4\") "
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.671418    4087 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/14422099-3edc-4a34-81b4-799a4c5ce2c4-test-volume\") pod \"14422099-3edc-4a34-81b4-799a4c5ce2c4\" (UID: \"14422099-3edc-4a34-81b4-799a4c5ce2c4\") "
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.671527    4087 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14422099-3edc-4a34-81b4-799a4c5ce2c4-test-volume" (OuterVolumeSpecName: "test-volume") pod "14422099-3edc-4a34-81b4-799a4c5ce2c4" (UID: "14422099-3edc-4a34-81b4-799a4c5ce2c4"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.675287    4087 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14422099-3edc-4a34-81b4-799a4c5ce2c4-kube-api-access-7zdx7" (OuterVolumeSpecName: "kube-api-access-7zdx7") pod "14422099-3edc-4a34-81b4-799a4c5ce2c4" (UID: "14422099-3edc-4a34-81b4-799a4c5ce2c4"). InnerVolumeSpecName "kube-api-access-7zdx7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.772018    4087 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/14422099-3edc-4a34-81b4-799a4c5ce2c4-test-volume\") on node \"functional-038709\" DevicePath \"\""
	Nov 20 21:31:29 functional-038709 kubelet[4087]: I1120 21:31:29.772066    4087 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zdx7\" (UniqueName: \"kubernetes.io/projected/14422099-3edc-4a34-81b4-799a4c5ce2c4-kube-api-access-7zdx7\") on node \"functional-038709\" DevicePath \"\""
	Nov 20 21:31:30 functional-038709 kubelet[4087]: I1120 21:31:30.535677    4087 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563d030410e5451be396ca3fe1444a582891bdfcd4b104691497a39a0879e080"
	Nov 20 21:31:33 functional-038709 kubelet[4087]: E1120 21:31:33.680383    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4sgsr" podUID="4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96"
	Nov 20 21:31:35 functional-038709 kubelet[4087]: E1120 21:31:35.679565    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-stx8w" podUID="3e410898-b9da-4942-9160-bfb873b23068"
	Nov 20 21:31:37 functional-038709 kubelet[4087]: I1120 21:31:37.033598    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwf4s\" (UniqueName: \"kubernetes.io/projected/1b788de4-7d82-49b9-ba75-ea4b75322d0c-kube-api-access-rwf4s\") pod \"dashboard-metrics-scraper-77bf4d6c4c-kmbr8\" (UID: \"1b788de4-7d82-49b9-ba75-ea4b75322d0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kmbr8"
	Nov 20 21:31:37 functional-038709 kubelet[4087]: I1120 21:31:37.034188    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9w5\" (UniqueName: \"kubernetes.io/projected/85bc919a-a74d-44fa-bf95-b9016d8a2a51-kube-api-access-2x9w5\") pod \"kubernetes-dashboard-855c9754f9-2cm7x\" (UID: \"85bc919a-a74d-44fa-bf95-b9016d8a2a51\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x"
	Nov 20 21:31:37 functional-038709 kubelet[4087]: I1120 21:31:37.034303    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85bc919a-a74d-44fa-bf95-b9016d8a2a51-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-2cm7x\" (UID: \"85bc919a-a74d-44fa-bf95-b9016d8a2a51\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x"
	Nov 20 21:31:37 functional-038709 kubelet[4087]: I1120 21:31:37.034419    4087 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b788de4-7d82-49b9-ba75-ea4b75322d0c-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-kmbr8\" (UID: \"1b788de4-7d82-49b9-ba75-ea4b75322d0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-kmbr8"
	Nov 20 21:31:37 functional-038709 kubelet[4087]: W1120 21:31:37.470033    4087 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/49c79e190e0e1ffaafda32e2fda9a5e9ddcba33d0c48b35ae16874f2e523ce44/crio-7cae6917cefcf2d20113092d017ec82b508efd4415fd53f1a4ed650ac882c9de WatchSource:0}: Error finding container 7cae6917cefcf2d20113092d017ec82b508efd4415fd53f1a4ed650ac882c9de: Status 404 returned error can't find the container with id 7cae6917cefcf2d20113092d017ec82b508efd4415fd53f1a4ed650ac882c9de
	Nov 20 21:31:43 functional-038709 kubelet[4087]: I1120 21:31:43.591076    4087 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2cm7x" podStartSLOduration=3.13723699 podStartE2EDuration="7.591055764s" podCreationTimestamp="2025-11-20 21:31:36 +0000 UTC" firstStartedPulling="2025-11-20 21:31:37.216958457 +0000 UTC m=+653.742590039" lastFinishedPulling="2025-11-20 21:31:41.670777206 +0000 UTC m=+658.196408813" observedRunningTime="2025-11-20 21:31:42.589009943 +0000 UTC m=+659.114641533" watchObservedRunningTime="2025-11-20 21:31:43.591055764 +0000 UTC m=+660.116687346"
	Nov 20 21:31:44 functional-038709 kubelet[4087]: E1120 21:31:44.678292    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4sgsr" podUID="4a9d5b51-5ea3-402c-a6cf-fcc56c7dbb96"
	Nov 20 21:31:49 functional-038709 kubelet[4087]: E1120 21:31:49.679086    4087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-stx8w" podUID="3e410898-b9da-4942-9160-bfb873b23068"
	
	
	==> kubernetes-dashboard [f08d85e29558b17b55dbca31df10c88c6b9482cba6cf5be5b093ccb3f743edf9] <==
	2025/11/20 21:31:41 Starting overwatch
	2025/11/20 21:31:41 Using namespace: kubernetes-dashboard
	2025/11/20 21:31:41 Using in-cluster config to connect to apiserver
	2025/11/20 21:31:41 Using secret token for csrf signing
	2025/11/20 21:31:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:31:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:31:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:31:41 Generating JWE encryption key
	2025/11/20 21:31:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:31:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:31:42 Initializing JWE encryption key from synchronized object
	2025/11/20 21:31:42 Creating in-cluster Sidecar client
	2025/11/20 21:31:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:31:42 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [250272ea23dc04b963068b4499763c0c31feed41e20161d8b05dd94b963afc4c] <==
	I1120 21:19:59.839472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:20:03.156481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:20:03.256443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:20:03.295941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:06.757196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:11.017639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:14.616152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:17.670288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:20.693064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:20.698872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:20:20.699199       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:20:20.701712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-038709_dd4e04b1-a073-4a79-9d32-df9ca7e5dd33!
	W1120 21:20:20.702318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:20:20.707293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db270f61-7106-4fb9-82d3-eb60ccc3e819", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-038709_dd4e04b1-a073-4a79-9d32-df9ca7e5dd33 became leader
	W1120 21:20:20.720137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:20:20.802255       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-038709_dd4e04b1-a073-4a79-9d32-df9ca7e5dd33!
	W1120 21:20:22.723283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:22.729181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:24.732955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:24.738787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:26.742759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:20:26.748134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b0f4052cb88f7099e5651a56334e9fdacabc767369cd7acace26a7d76af085e2] <==
	W1120 21:31:31.396081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:33.400086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:33.405000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:35.408771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:35.415834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:37.419103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:37.427286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:39.441068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:39.445781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:41.448472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:41.453198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:43.456440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:43.463729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:45.467262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:45.471797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:47.477346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:47.482499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:49.485970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:49.494021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:51.497573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:51.503441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:53.506543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:53.513997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:55.517563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:31:55.527771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-038709 -n functional-038709
helpers_test.go:269: (dbg) Run:  kubectl --context functional-038709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-stx8w hello-node-connect-7d85dfc575-4sgsr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-038709 describe pod busybox-mount hello-node-75c85bcc94-stx8w hello-node-connect-7d85dfc575-4sgsr
helpers_test.go:290: (dbg) kubectl --context functional-038709 describe pod busybox-mount hello-node-75c85bcc94-stx8w hello-node-connect-7d85dfc575-4sgsr:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-038709/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 21:31:25 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://89da2a97eee3d61c3d83c81fbca38515932478dbde7f0868d1ae5e29e6c4c135
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 21:31:27 +0000
	      Finished:     Thu, 20 Nov 2025 21:31:27 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zdx7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7zdx7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  31s   default-scheduler  Successfully assigned default/busybox-mount to functional-038709
	  Normal  Pulling    32s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     30s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.985s (1.985s including waiting). Image size: 3774172 bytes.
	  Normal  Created    30s   kubelet            Created container: mount-munger
	  Normal  Started    30s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-stx8w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-038709/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 21:21:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wt8cz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wt8cz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-stx8w to functional-038709
	  Normal   Pulling    7m35s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m35s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m35s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    35s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     35s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-4sgsr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-038709/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 21:21:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wvh7p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wvh7p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4sgsr to functional-038709
	  Normal   Pulling    7m7s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.55s)
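
The repeated pull failure above is CRI-O's short-name policy at work: with short-name-mode set to enforcing, an unqualified reference such as "kicbase/echo-server" is rejected whenever more than one configured registry could satisfy it. A minimal diagnostic sketch, not part of the recorded run (the registries.conf path inside the node and the 1.0 tag are assumptions):

	# Inspect the node's short-name policy (path assumed from common CRI-O setups).
	out/minikube-linux-arm64 -p functional-038709 ssh -- grep -n "short-name" /etc/containers/registries.conf
	# Point the failing deployment at a fully qualified image so no short-name
	# resolution is needed (1.0 is an assumed published tag of the image).
	kubectl --context functional-038709 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:1.0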

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image load --daemon kicbase/echo-server:functional-038709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-038709" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)
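
When `image load --daemon` exits cleanly but `image ls` does not show the tag, inspecting the CRI-O image store on the node directly can separate a load failure from a listing failure. A small sketch, not from the recorded run (it assumes crictl is available inside the kicbase node, which is the usual layout):

	# List images as CRI-O itself sees them, bypassing minikube's own listing.
	out/minikube-linux-arm64 -p functional-038709 ssh -- sudo crictl images
	# Compare with what minikube reports for the same runtime.
	out/minikube-linux-arm64 -p functional-038709 image ls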

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image load --daemon kicbase/echo-server:functional-038709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-038709" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-038709
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image load --daemon kicbase/echo-server:functional-038709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-038709" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-038709 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-038709 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-stx8w" [3e410898-b9da-4942-9160-bfb873b23068] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-038709 -n functional-038709
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-20 21:31:16.1981098 +0000 UTC m=+1305.540964555
functional_test.go:1460: (dbg) Run:  kubectl --context functional-038709 describe po hello-node-75c85bcc94-stx8w -n default
functional_test.go:1460: (dbg) kubectl --context functional-038709 describe po hello-node-75c85bcc94-stx8w -n default:
Name:             hello-node-75c85bcc94-stx8w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-038709/192.168.49.2
Start Time:       Thu, 20 Nov 2025 21:21:15 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wt8cz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wt8cz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-stx8w to functional-038709
  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m48s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m33s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-038709 logs hello-node-75c85bcc94-stx8w -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-038709 logs hello-node-75c85bcc94-stx8w -n default: exit status 1 (98.109469ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-stx8w" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-038709 logs hello-node-75c85bcc94-stx8w -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image save kicbase/echo-server:functional-038709 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1120 21:21:17.616219  860545 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:21:17.616972  860545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:17.616986  860545 out.go:374] Setting ErrFile to fd 2...
	I1120 21:21:17.616992  860545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:17.617284  860545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:21:17.618021  860545 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:17.618179  860545 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:17.618696  860545 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
	I1120 21:21:17.636250  860545 ssh_runner.go:195] Run: systemctl --version
	I1120 21:21:17.636313  860545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
	I1120 21:21:17.653294  860545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
	I1120 21:21:17.754014  860545 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1120 21:21:17.754079  860545 cache_images.go:255] Failed to load cached images for "functional-038709": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1120 21:21:17.754103  860545 cache_images.go:267] failed pushing to: functional-038709

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
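
This failure is downstream of ImageSaveToFile above: the tarball at /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar was never written, so the load step had nothing to read. A save-then-load sketch that checks the archive before loading it, not from the recorded run (the /tmp path is an arbitrary example):

	out/minikube-linux-arm64 -p functional-038709 image save kicbase/echo-server:functional-038709 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar   # stop here if the save silently produced nothing
	out/minikube-linux-arm64 -p functional-038709 image load /tmp/echo-server.tar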

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-038709
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image save --daemon kicbase/echo-server:functional-038709 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-038709
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-038709: exit status 1 (18.119411ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-038709

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-038709

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
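
`image save --daemon` can only export a tag that actually exists in the cluster runtime; given the load failures above, kicbase/echo-server:functional-038709 most likely never reached CRI-O, so there was nothing to push back into the host Docker daemon. A quick two-sided check, illustrative only:

	# Is the tag present inside the minikube runtime?
	out/minikube-linux-arm64 -p functional-038709 image ls | grep echo-server
	# Is it present in the host Docker daemon (the test expects a localhost/ prefix)?
	docker image ls | grep echo-server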

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 service --namespace=default --https --url hello-node: exit status 115 (397.312579ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32514
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-038709 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)
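
SVC_UNREACHABLE here means the Service object exists (a NodePort URL is even printed on stdout) but has no ready backend, because the hello-node pod never left ImagePullBackOff. A short cluster-side check, illustrative only, using the same context name as the test:

	# An empty ENDPOINTS column means no ready pod is backing the service.
	kubectl --context functional-038709 get endpoints hello-node -n default
	kubectl --context functional-038709 get pods -l app=hello-node -n default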

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 service hello-node --url --format={{.IP}}: exit status 115 (402.152476ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-038709 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 service hello-node --url: exit status 115 (411.976188ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32514
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-038709 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32514
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (448.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 stop --alsologtostderr -v 5: (37.657965872s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 start --wait true --alsologtostderr -v 5
E1120 21:38:38.578350  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:38:59.681689  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:41:15.820254  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:41:43.523168  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:43:38.577762  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409851 start --wait true --alsologtostderr -v 5: exit status 80 (6m47.262423307s)

                                                
                                                
-- stdout --
	* [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-409851-m03" control-plane node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-409851-m04" worker node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:38:30.769876  884264 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:38:30.770088  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770114  884264 out.go:374] Setting ErrFile to fd 2...
	I1120 21:38:30.770133  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770657  884264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:38:30.771309  884264 out.go:368] Setting JSON to false
	I1120 21:38:30.772185  884264 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15636,"bootTime":1763659075,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:38:30.772284  884264 start.go:143] virtualization:  
	I1120 21:38:30.775797  884264 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:38:30.779473  884264 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:38:30.779630  884264 notify.go:221] Checking for updates...
	I1120 21:38:30.785039  884264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:38:30.787825  884264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:30.790672  884264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:38:30.793534  884264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:38:30.796313  884264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:38:30.799725  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:30.799830  884264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:38:30.836806  884264 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:38:30.836950  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.901769  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.892669658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.901887  884264 docker.go:319] overlay module found
	I1120 21:38:30.904943  884264 out.go:179] * Using the docker driver based on existing profile
	I1120 21:38:30.907794  884264 start.go:309] selected driver: docker
	I1120 21:38:30.907812  884264 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.907982  884264 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:38:30.908085  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.967881  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.95851914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.968308  884264 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:38:30.968343  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:30.968403  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:30.968455  884264 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.971749  884264 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:38:30.974680  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:30.977600  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:30.980407  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:30.980458  884264 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:38:30.980472  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:30.980485  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:30.980567  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:30.980578  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:30.980718  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:30.999616  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:30.999641  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:30.999654  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:30.999678  884264 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:30.999743  884264 start.go:364] duration metric: took 37.309µs to acquireMachinesLock for "ha-409851"
	I1120 21:38:30.999781  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:30.999790  884264 fix.go:54] fixHost starting: 
	I1120 21:38:31.000072  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.018393  884264 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:38:31.018439  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:31.021858  884264 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:38:31.021974  884264 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:38:31.304211  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.327776  884264 kic.go:430] container "ha-409851" state is running.
	I1120 21:38:31.328187  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:31.353945  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:31.354443  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:31.354512  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:31.382173  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:31.382524  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:31.382534  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:31.383289  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:38:34.531685  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.531763  884264 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:38:34.531863  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.551282  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.551609  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.551626  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:38:34.704765  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.704852  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.723366  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.723694  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.723717  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:34.867982  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:34.868025  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:34.868088  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:34.868104  884264 provision.go:84] configureAuth start
	I1120 21:38:34.868188  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:34.887153  884264 provision.go:143] copyHostCerts
	I1120 21:38:34.887208  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887270  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:34.887291  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887383  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:34.887509  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887538  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:34.887549  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887584  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:34.887659  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887686  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:34.887694  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887724  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:34.887782  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:38:35.400008  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:35.400088  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:35.400141  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.418360  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:35.518831  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:35.518950  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:35.537804  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:35.537900  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:38:35.556580  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:35.556644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:35.575458  884264 provision.go:87] duration metric: took 707.296985ms to configureAuth
	I1120 21:38:35.575487  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:35.575723  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:35.575844  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.594086  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:35.594409  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:35.594430  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:35.962817  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:35.962837  884264 machine.go:97] duration metric: took 4.608380541s to provisionDockerMachine
	I1120 21:38:35.962848  884264 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:38:35.962859  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:35.962920  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:35.962989  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.984847  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.091216  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:36.094852  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:36.094880  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:36.094891  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:36.094947  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:36.095090  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:36.095099  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:36.095212  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:36.102846  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:36.120698  884264 start.go:296] duration metric: took 157.834355ms for postStartSetup
	I1120 21:38:36.120824  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:36.120914  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.138055  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.236342  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:36.241086  884264 fix.go:56] duration metric: took 5.241287155s for fixHost
	I1120 21:38:36.241113  884264 start.go:83] releasing machines lock for "ha-409851", held for 5.241354183s
	I1120 21:38:36.241193  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:36.259831  884264 ssh_runner.go:195] Run: cat /version.json
	I1120 21:38:36.259893  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.260152  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:36.260229  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.287560  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.292613  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.386937  884264 ssh_runner.go:195] Run: systemctl --version
	I1120 21:38:36.496830  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:36.537327  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:36.541923  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:36.542024  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:36.549865  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:36.549933  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:36.549983  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:36.550070  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:36.565179  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:36.578552  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:36.578675  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:36.594881  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:36.608683  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:36.731342  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:36.868669  884264 docker.go:234] disabling docker service ...
	I1120 21:38:36.868857  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:36.886109  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:36.900226  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:37.014736  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:37.144034  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:37.158890  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:37.173954  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:37.174053  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.183273  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:37.183345  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.192471  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.201342  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.210418  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:37.218694  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.227957  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.236515  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.245491  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:37.253272  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:38:37.260653  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:37.378780  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
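	(Illustrative aside, not part of the captured run: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A minimal spot-check of the resulting drop-in, assuming the same profile name and file path shown in this log, might be:

	    # run on the node (e.g. via: minikube -p ha-409851 ssh)
	    sudo grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf

	Per the edits above, this should report pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl.)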
	I1120 21:38:37.568343  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:37.568517  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:37.572886  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:37.572998  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:37.576787  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:37.603768  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
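	(Illustrative aside: the version probe above goes through the crictl binary found at /usr/local/bin/crictl, whose endpoint was pointed at the CRI-O socket by the /etc/crictl.yaml written a few steps earlier. An equivalent manual query, assuming the same socket path, would be:

	    # run on the node; the endpoint below is the one configured in /etc/crictl.yaml above
	    sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)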
	I1120 21:38:37.603878  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.634707  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.668026  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:37.670996  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:37.688086  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:37.692097  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.702318  884264 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:38:37.702473  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:37.702533  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.738810  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.738882  884264 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:38:37.739011  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.764274  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.764295  884264 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:38:37.764305  884264 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:38:37.764401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
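	(Illustrative aside: the kubelet drop-in rendered above is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf — the 359-byte scp further down. A hypothetical spot-check of what the node actually runs with, assuming the same profile:

	    # run on the node; shows the effective kubelet unit (drop-in included) and the live command line
	    systemctl cat kubelet
	    pgrep -a kubelet
	)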
	I1120 21:38:37.764481  884264 ssh_runner.go:195] Run: crio config
	I1120 21:38:37.825630  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:37.825661  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:37.825685  884264 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:38:37.825743  884264 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:38:37.825905  884264 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:38:37.825931  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:37.825986  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:37.839066  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:37.839175  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
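	(Illustrative aside: the static pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml — the 1358-byte scp below — and kube-vip is expected to claim the HA VIP 192.168.49.254 on eth0 once a control-plane node wins leader election. A hypothetical post-restart spot-check, assuming the same profile:

	    # run on the node after the restart completes
	    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	    ip addr show eth0    # the VIP 192.168.49.254 should appear here on the current kube-vip leader
	)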
	I1120 21:38:37.839248  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:37.847133  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:37.847235  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:38:37.855412  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:38:37.868477  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:37.881823  884264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:38:37.895195  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:38:37.908845  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:37.912943  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.923133  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:38.049716  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:38.067155  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:38:38.067178  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:38.067197  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.067386  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:38.067464  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:38.067494  884264 certs.go:257] generating profile certs ...
	I1120 21:38:38.067639  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:38.067683  884264 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56
	I1120 21:38:38.067722  884264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 21:38:38.134399  884264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 ...
	I1120 21:38:38.134432  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56: {Name:mk7acbd3c6c1dd357ee45d74f751ed3339a8f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134668  884264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 ...
	I1120 21:38:38.134693  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56: {Name:mkd0412497c04b2292f00ce455371fa1840c4bc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134834  884264 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:38:38.135032  884264 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:38:38.135229  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:38.135248  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:38.135280  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:38.135304  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:38.135321  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:38.135350  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:38.135384  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:38.135407  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:38.135423  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:38.135493  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:38.135556  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:38.135571  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:38.135614  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:38.135660  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:38.135691  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:38.135764  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:38.135818  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.135841  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.135858  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.136478  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:38.161386  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:38.183426  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:38.209571  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:38.230449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:38.269189  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:38.290285  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:38.310366  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:38.336702  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:38.356298  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:38.377772  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:38.397354  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:38:38.410774  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:38.417590  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.426055  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:38.435256  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442057  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442128  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.484356  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:38.492206  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.499992  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:38.507965  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512359  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512476  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.554117  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:38:38.562052  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.569885  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:38.578289  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582380  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582505  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.624140  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:38.633756  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:38.637748  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:38.679477  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:38.725454  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:38.767445  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:38.816551  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:38.874060  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
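	(Illustrative aside: the -checkend 86400 probes above exit non-zero only if a certificate expires within the next 24 hours, which is what lets the restart reuse the existing certificates here. To see the actual expiry dates rather than a pass/fail, a hypothetical variant on the same files would be:

	    # run on the node; prints notAfter for three of the certificates probed above
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	      sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/$c.crt
	    done
	)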
	I1120 21:38:38.945404  884264 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:38.945592  884264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:38:38.945702  884264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:38:39.035653  884264 cri.go:89] found id: "5c78de3db456c35c2eafd8be0e59c965664f006cb3e9b19c4d9b05b81ab079fc"
	I1120 21:38:39.035728  884264 cri.go:89] found id: "be96e9e3ffb4708dccf24988f485136e1039f591a2e9c93edef5d830431fa080"
	I1120 21:38:39.035748  884264 cri.go:89] found id: "b40d2cfd438a8dc3a5f89de00510928701b9ef1887f2f4f9055a3978ea2197fa"
	I1120 21:38:39.035769  884264 cri.go:89] found id: "696b700dcb568291344392af5fbbff9e59bb78b02bbbf2fa18e2156bab42fae1"
	I1120 21:38:39.035804  884264 cri.go:89] found id: "bbe2aa5c20be55307484a6dc5e0cf27f1adb8b5e2bad7448657394d0153a3e84"
	I1120 21:38:39.035846  884264 cri.go:89] found id: ""
	I1120 21:38:39.035929  884264 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:38:39.060419  884264 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:38:39Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:38:39.060556  884264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:38:39.074901  884264 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:38:39.074968  884264 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:38:39.075123  884264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:38:39.088673  884264 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:39.089259  884264 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.089441  884264 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:38:39.089845  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.090518  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:38:39.091335  884264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:38:39.091424  884264 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:38:39.091402  884264 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:38:39.091527  884264 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:38:39.091559  884264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:38:39.091579  884264 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:38:39.091949  884264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:38:39.104395  884264 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:38:39.104468  884264 kubeadm.go:602] duration metric: took 29.411064ms to restartPrimaryControlPlane
	I1120 21:38:39.104495  884264 kubeadm.go:403] duration metric: took 159.115003ms to StartCluster
	I1120 21:38:39.104539  884264 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.104635  884264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.105401  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.105666  884264 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:39.105723  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:38:39.105753  884264 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:38:39.106516  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.111744  884264 out.go:179] * Enabled addons: 
	I1120 21:38:39.114735  884264 addons.go:515] duration metric: took 8.971082ms for enable addons: enabled=[]
	I1120 21:38:39.114834  884264 start.go:247] waiting for cluster config update ...
	I1120 21:38:39.114858  884264 start.go:256] writing updated cluster config ...
	I1120 21:38:39.118409  884264 out.go:203] 
	I1120 21:38:39.121722  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.121897  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.125210  884264 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:38:39.128166  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:39.131274  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:39.134220  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:39.134243  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:39.134349  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:39.134358  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:39.134481  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.134707  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:39.163368  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:39.163387  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:39.163399  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:39.163424  884264 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:39.163475  884264 start.go:364] duration metric: took 37.473µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:38:39.163495  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:39.163500  884264 fix.go:54] fixHost starting: m02
	I1120 21:38:39.163761  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.188597  884264 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:38:39.188621  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:39.197319  884264 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:38:39.197414  884264 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:38:39.580228  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.619726  884264 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:38:39.620289  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:39.645172  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.645452  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:39.645526  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:39.670151  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:39.670895  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:39.670954  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:39.671692  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44478->127.0.0.1:33922: read: connection reset by peer
	I1120 21:38:42.978516  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:38:42.978591  884264 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:38:42.978693  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.005096  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.005433  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.005447  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:38:43.320783  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:38:43.320866  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.374875  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.375237  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.375260  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:43.620767  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:43.620794  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:43.620810  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:43.620821  884264 provision.go:84] configureAuth start
	I1120 21:38:43.620881  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:43.659411  884264 provision.go:143] copyHostCerts
	I1120 21:38:43.659453  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659485  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:43.659493  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659567  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:43.659644  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659661  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:43.659665  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659690  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:43.659728  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659743  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:43.659747  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659768  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:43.659814  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:38:44.333480  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:44.333555  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:44.333605  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.352064  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:44.461767  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:44.461834  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:38:44.500018  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:44.500084  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:44.547484  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:44.547557  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:44.596357  884264 provision.go:87] duration metric: took 975.522241ms to configureAuth
	I1120 21:38:44.596401  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:44.596654  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:44.596788  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.624344  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:44.624651  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:44.624670  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:45.322074  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:45.322113  884264 machine.go:97] duration metric: took 5.676650753s to provisionDockerMachine
	I1120 21:38:45.322128  884264 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:38:45.322141  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:45.322226  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:45.322277  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.342731  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.453499  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:45.470888  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:45.470938  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:45.470950  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:45.471014  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:45.471096  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:45.471109  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:45.471214  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:45.489273  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:45.556457  884264 start.go:296] duration metric: took 234.311564ms for postStartSetup
	I1120 21:38:45.556611  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:45.556676  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.587707  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.729685  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:45.740986  884264 fix.go:56] duration metric: took 6.577477813s for fixHost
	I1120 21:38:45.741008  884264 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.577525026s
	I1120 21:38:45.741083  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:45.771820  884264 out.go:179] * Found network options:
	I1120 21:38:45.774905  884264 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:38:45.777764  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:38:45.777810  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:38:45.777890  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:45.777942  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.778213  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:45.778264  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.814965  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.816280  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:46.130838  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:46.136697  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:46.136780  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:46.154525  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:46.154562  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:46.154596  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:46.154657  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:46.179167  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:46.198207  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:46.198285  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:46.220547  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:46.238372  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:46.474214  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:46.692069  884264 docker.go:234] disabling docker service ...
	I1120 21:38:46.692151  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:46.711611  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:46.733293  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:46.937783  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:47.161295  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:47.177649  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:47.196405  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:47.196499  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.211080  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:47.211159  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.226280  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.241556  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.251537  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:47.263194  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.279048  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.292565  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.305383  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:47.318266  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:38:47.330851  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:47.572162  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
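(Editor's aside: a hedged way to confirm that the sed edits above are reflected in the cri-o drop-in after the restart. The keys are taken from the commands in the log; the exact file contents beyond those keys are an assumption:)

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected per the edits above: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list
	# containing "net.ipv4.ip_unprivileged_port_start=0"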
	I1120 21:38:47.826907  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:47.827027  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:47.830650  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:47.830757  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:47.834471  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:47.858658  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:38:47.858770  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.887568  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.924184  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:47.927160  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:38:47.930191  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:47.947316  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:47.951294  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:47.961645  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:38:47.961891  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:47.962176  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:47.978704  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:38:47.979070  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:38:47.979083  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:47.979100  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:47.979221  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:47.979265  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:47.979275  884264 certs.go:257] generating profile certs ...
	I1120 21:38:47.979366  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:47.979435  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.36974727
	I1120 21:38:47.979478  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:47.979491  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:47.979505  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:47.979525  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:47.979536  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:47.979550  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:47.979561  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:47.979576  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:47.979587  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:47.979641  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:47.979672  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:47.979689  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:47.979713  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:47.979738  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:47.979762  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:47.979804  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:47.979840  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:47.979855  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:47.979869  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:47.979929  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:47.996700  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:48.095431  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:38:48.099410  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:38:48.107940  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:38:48.111757  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:38:48.120021  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:38:48.123592  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:38:48.132027  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:38:48.135667  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:38:48.143707  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:38:48.147064  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:38:48.155777  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:38:48.159326  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:38:48.168074  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:48.187052  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:48.204261  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:48.222484  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:48.239999  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:48.257750  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:48.275489  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:48.293203  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:48.310644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:48.333442  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:48.353223  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:48.371976  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:38:48.384868  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:38:48.397625  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:38:48.410587  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:38:48.423732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:38:48.437291  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:38:48.449732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:38:48.462200  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:48.468726  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.476219  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:48.483790  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.487957  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.488071  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.529603  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:48.541715  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.551230  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:48.560557  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566086  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566214  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.614556  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:48.622341  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.630607  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:48.638692  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642390  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.683660  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
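(Editor's aside: a hedged sketch of the hash-link convention the three openssl/ln passes above rely on. OpenSSL resolves CAs in /etc/ssl/certs by "<subject-hash>.0", so each PEM gets a symlink named after its own hash — b5213941, 51391683 and 3ec20f2e in this run:)

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	sudo test -L "/etc/ssl/certs/${H}.0" && echo "hash link present"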
	I1120 21:38:48.692961  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:48.697105  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:48.738157  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:48.779134  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:48.820771  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:48.861964  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:48.903079  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:38:48.946240  884264 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:38:48.946401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:38:48.946432  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:48.946494  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:48.959247  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:48.959318  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
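(Editor's aside: the generator above gave up on control-plane load-balancing because the ip_vs kernel modules were not visible, which is what the failed "lsmod | grep ip_vs" check decided; the manifest therefore only advertises the 192.168.49.254 VIP via ARP. A hedged reproduction of that check — loading the module may not be possible inside the kic container:)

	lsmod | grep ip_vs || echo "ip_vs not loaded; kube-vip runs without IPVS load-balancing"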
	I1120 21:38:48.959400  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:48.967383  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:48.967482  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:38:48.975230  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:38:48.988715  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:49.001843  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
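(Editor's aside: a hedged check that the three files copied above are the ones systemd and the kubelet will actually pick up; the commands are illustrative and not part of the log:)

	systemctl cat kubelet                  # should include kubelet.service.d/10-kubeadm.conf
	systemctl show kubelet -p DropInPaths
	sudo ls /etc/kubernetes/manifests/kube-vip.yaml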
	I1120 21:38:49.019090  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:49.023118  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:49.034137  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.154884  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.169065  884264 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:49.169534  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:49.173571  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:38:49.176570  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.315404  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.329975  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:38:49.330049  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:38:49.330298  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	W1120 21:38:59.331759  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout
	I1120 21:39:02.652543  884264 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:39:12.654218  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:48284->192.168.49.2:8443: read: connection reset by peer
	I1120 21:39:13.752634  884264 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:39:13.752662  884264 node_ready.go:38] duration metric: took 24.422335125s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:39:13.752675  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:39:13.752734  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:39:13.802621  884264 api_server.go:72] duration metric: took 24.633509474s to wait for apiserver process to appear ...
	I1120 21:39:13.802644  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:39:13.802666  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:13.846540  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:39:13.846565  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:39:14.303057  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.317076  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.317121  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:14.803756  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.835165  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.835252  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.302766  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.327917  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.327996  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.802846  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.844402  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.844486  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:16.302774  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:16.349139  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:39:16.355368  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:39:16.355451  884264 api_server.go:131] duration metric: took 2.552797549s to wait for apiserver health ...
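	The wait recorded above retries https://192.168.49.2:8443/healthz roughly every 500ms, treating the 500 responses (rbac/bootstrap-roles still initializing) as not-yet-healthy until the endpoint returns 200. A minimal Go sketch of that wait pattern; it uses anonymous TLS for brevity, whereas the real client authenticates with the profile's client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. InsecureSkipVerify is only for this sketch; the
// test harness presents the cluster client certificate instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}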
	I1120 21:39:16.355475  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:39:16.388991  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:39:16.389076  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.389097  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.389116  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.389161  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.389188  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.389225  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.389248  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.389268  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.389303  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.389327  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.389347  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.389382  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.389405  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.389426  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.389462  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.389484  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.389501  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.389582  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.389609  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.389631  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.389670  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.389696  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.389718  884264 system_pods.go:61] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.389753  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.389791  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.389812  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.389848  884264 system_pods.go:74] duration metric: took 34.353977ms to wait for pod list to return data ...
	I1120 21:39:16.389871  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:39:16.416752  884264 default_sa.go:45] found service account: "default"
	I1120 21:39:16.416829  884264 default_sa.go:55] duration metric: took 26.934653ms for default service account to be created ...
	I1120 21:39:16.416854  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:39:16.495655  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:39:16.495738  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.495762  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.495799  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.495829  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.495850  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.495891  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.495919  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.495943  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.495976  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.496003  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.496027  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.496065  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.496119  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.496154  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.496175  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.496206  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.496230  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.496253  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.496290  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.496316  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.496339  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.496376  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.496404  884264 system_pods.go:89] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.496424  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.496462  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.496488  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.496514  884264 system_pods.go:126] duration metric: took 79.640825ms to wait for k8s-apps to be running ...
	I1120 21:39:16.496549  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:39:16.496649  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:39:16.525131  884264 system_svc.go:56] duration metric: took 28.572383ms WaitForService to wait for kubelet
	I1120 21:39:16.525221  884264 kubeadm.go:587] duration metric: took 27.356113948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:39:16.525256  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:39:16.547500  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547592  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547622  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547645  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547686  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547706  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547727  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547760  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547787  884264 node_conditions.go:105] duration metric: took 22.508874ms to run NodePressure ...
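	The NodePressure step above reads each node's capacity (ephemeral storage 203034800Ki, 2 CPUs) from the Node objects. A comparable client-go sketch that lists nodes and prints the same capacity fields; the kubeconfig path is an assumption, since the harness builds its rest.Config from the profile certificates directly:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a v1.ResourceList; Cpu() and StorageEphemeral() return
		// resource.Quantity values such as "2" and "203034800Ki".
		fmt.Printf("%s\tcpu=%s\tephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}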
	I1120 21:39:16.547814  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:39:16.547869  884264 start.go:256] writing updated cluster config ...
	I1120 21:39:16.551433  884264 out.go:203] 
	I1120 21:39:16.554880  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:16.555111  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.558694  884264 out.go:179] * Starting "ha-409851-m03" control-plane node in "ha-409851" cluster
	I1120 21:39:16.562364  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:39:16.565426  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:39:16.568528  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:39:16.568640  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:39:16.568611  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:39:16.568996  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:39:16.569028  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:39:16.569191  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.590195  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:39:16.590214  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:39:16.590225  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:39:16.590248  884264 start.go:360] acquireMachinesLock for ha-409851-m03: {Name:mkdc61c72ab6a67582f9ee213a06b683b619e587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:39:16.590297  884264 start.go:364] duration metric: took 34.011µs to acquireMachinesLock for "ha-409851-m03"
	I1120 21:39:16.590316  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:39:16.590321  884264 fix.go:54] fixHost starting: m03
	I1120 21:39:16.590574  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:16.615086  884264 fix.go:112] recreateIfNeeded on ha-409851-m03: state=Stopped err=<nil>
	W1120 21:39:16.615115  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:39:16.618135  884264 out.go:252] * Restarting existing docker container for "ha-409851-m03" ...
	I1120 21:39:16.618225  884264 cli_runner.go:164] Run: docker start ha-409851-m03
	I1120 21:39:16.978914  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:17.006894  884264 kic.go:430] container "ha-409851-m03" state is running.
	I1120 21:39:17.007317  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:17.038413  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:17.038674  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:39:17.038742  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:17.068281  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:17.068584  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:17.068592  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:39:17.070869  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:39:20.309993  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.310063  884264 ubuntu.go:182] provisioning hostname "ha-409851-m03"
	I1120 21:39:20.310163  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.336716  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.337029  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.337043  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m03 && echo "ha-409851-m03" | sudo tee /etc/hostname
	I1120 21:39:20.816264  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.816432  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.846177  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.846510  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.846531  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:39:21.112630  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:39:21.112715  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:39:21.112747  884264 ubuntu.go:190] setting up certificates
	I1120 21:39:21.112788  884264 provision.go:84] configureAuth start
	I1120 21:39:21.112872  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:21.141385  884264 provision.go:143] copyHostCerts
	I1120 21:39:21.141425  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141458  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:39:21.141465  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141537  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:39:21.141610  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141626  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:39:21.141631  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141657  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:39:21.141696  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141713  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:39:21.141717  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141739  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:39:21.141793  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m03 san=[127.0.0.1 192.168.49.4 ha-409851-m03 localhost minikube]
	I1120 21:39:21.285547  884264 provision.go:177] copyRemoteCerts
	I1120 21:39:21.285659  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:39:21.285756  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.304352  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:21.419419  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:39:21.419479  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:39:21.455413  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:39:21.455471  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:39:21.499343  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:39:21.499449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:39:21.553711  884264 provision.go:87] duration metric: took 440.893582ms to configureAuth
	I1120 21:39:21.553743  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:39:21.553979  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:21.554094  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.579157  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:21.579463  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:21.579484  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:39:22.222733  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:39:22.222764  884264 machine.go:97] duration metric: took 5.184080337s to provisionDockerMachine
	I1120 21:39:22.222784  884264 start.go:293] postStartSetup for "ha-409851-m03" (driver="docker")
	I1120 21:39:22.222795  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:39:22.222869  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:39:22.222949  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.258502  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.366087  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:39:22.370384  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:39:22.370464  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:39:22.370490  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:39:22.370582  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:39:22.370714  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:39:22.370740  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:39:22.370890  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:39:22.380356  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:39:22.405408  884264 start.go:296] duration metric: took 182.600947ms for postStartSetup
	I1120 21:39:22.405514  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:39:22.405570  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.425307  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.524350  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:39:22.529911  884264 fix.go:56] duration metric: took 5.939581904s for fixHost
	I1120 21:39:22.529937  884264 start.go:83] releasing machines lock for "ha-409851-m03", held for 5.939631735s
	I1120 21:39:22.530012  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:22.551424  884264 out.go:179] * Found network options:
	I1120 21:39:22.560397  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:39:22.563475  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563504  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563526  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563536  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:39:22.563629  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:39:22.563664  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:39:22.563687  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.563722  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.593348  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.599158  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.850591  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:39:22.957812  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:39:22.957885  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:39:22.971629  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:39:22.971651  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:39:22.971683  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:39:22.971740  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:39:22.992266  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:39:23.017885  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:39:23.018003  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:39:23.047686  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:39:23.071594  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:39:23.341231  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:39:23.618998  884264 docker.go:234] disabling docker service ...
	I1120 21:39:23.619120  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:39:23.641818  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:39:23.676773  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:39:23.963173  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:39:24.189401  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:39:24.206793  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:39:24.222800  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:39:24.222943  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.233205  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:39:24.233339  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.242572  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.252400  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.262758  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:39:24.283691  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.293195  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.301843  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.310942  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:39:24.319806  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:39:24.328026  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:39:24.598997  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:40:54.919407  884264 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320335625s)
	I1120 21:40:54.919437  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:40:54.919501  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:40:54.923827  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:40:54.923896  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:40:54.927766  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:40:54.956875  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:40:54.956961  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:54.989990  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:55.031599  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:40:55.034874  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:40:55.042500  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:40:55.050091  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:40:55.084630  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:40:55.090169  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
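	The one-liner above removes any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping to the gateway IP. A rough Go equivalent of that replace-or-append edit (writing /etc/hosts directly requires root; the log uses a temp file plus sudo cp instead):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		out = append(out, line)
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}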
	I1120 21:40:55.103094  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:40:55.103394  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:55.103694  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:40:55.127072  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:40:55.127420  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.4
	I1120 21:40:55.127444  884264 certs.go:195] generating shared ca certs ...
	I1120 21:40:55.127465  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:40:55.127604  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:40:55.127650  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:40:55.127662  884264 certs.go:257] generating profile certs ...
	I1120 21:40:55.127765  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:40:55.127891  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.b859e16b
	I1120 21:40:55.127933  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:40:55.127943  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:40:55.127956  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:40:55.127969  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:40:55.127980  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:40:55.127992  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:40:55.128006  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:40:55.128033  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:40:55.128045  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:40:55.128112  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:40:55.128145  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:40:55.128160  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:40:55.128187  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:40:55.128214  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:40:55.128241  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:40:55.128290  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:40:55.128326  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.128344  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.128357  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.128426  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:40:55.150727  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:40:55.251340  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:40:55.256433  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:40:55.266784  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:40:55.270534  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:40:55.279775  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:40:55.284275  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:40:55.294321  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:40:55.298684  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:40:55.307319  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:40:55.310734  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:40:55.319458  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:40:55.323063  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:40:55.331533  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:40:55.350148  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:40:55.371874  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:40:55.394257  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:40:55.416142  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:40:55.436749  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:40:55.457715  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:40:55.490155  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:40:55.512635  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:40:55.534827  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:40:55.566135  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:40:55.588247  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:40:55.601998  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:40:55.617348  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:40:55.631678  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:40:55.644956  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:40:55.658910  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:40:55.674549  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:40:55.689850  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:40:55.697169  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.706702  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:40:55.715708  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719673  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719798  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.761953  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:40:55.770722  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.779665  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:40:55.796200  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800339  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800460  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.842260  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:40:55.849720  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.857782  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:40:55.865998  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870179  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870265  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.917536  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:40:55.925307  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:40:55.929384  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:40:55.971056  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:40:56.013165  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:40:56.055581  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:40:56.098307  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:40:56.140587  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
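	Each openssl x509 -checkend 86400 call above verifies that a control-plane certificate stays valid for at least another 24 hours before it is reused. An equivalent check in Go with crypto/x509; the path below is one of the files from the log, and any PEM certificate works:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkExpiringSoon reports whether the PEM certificate at path expires within
// the given window (86400s, i.e. 24 hours, in the log's openssl invocation).
func checkExpiringSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkExpiringSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}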
	I1120 21:40:56.181956  884264 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 21:40:56.182053  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:40:56.182091  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:40:56.182144  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:40:56.195065  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:40:56.195123  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
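	The kube-vip step above gives up on IPVS-based control-plane load-balancing because `lsmod | grep ip_vs` found no module, and falls back to the plain VIP static-pod manifest shown. A rough local equivalent of that module probe, reading /proc/modules (the file lsmod itself reads) rather than shelling out over SSH as the test run does:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// ipvsAvailable reports whether any ip_vs kernel module is currently loaded.
func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		log.Fatal(err)
	}
	if ok {
		fmt.Println("ip_vs loaded: IPVS load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs missing: fall back to the plain VIP manifest, as in the log")
	}
}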
	I1120 21:40:56.195188  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:40:56.203155  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:40:56.203249  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:40:56.210881  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:40:56.226182  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:40:56.241370  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:40:56.258633  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:40:56.262629  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:40:56.274206  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.407402  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.425980  884264 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:40:56.426593  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:56.429208  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:40:56.432088  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.603926  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.618659  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W1120 21:40:56.618769  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:40:56.619068  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m03" to be "Ready" ...
	W1120 21:40:58.623454  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	W1120 21:41:00.623718  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	I1120 21:41:03.122881  884264 node_ready.go:49] node "ha-409851-m03" is "Ready"
	I1120 21:41:03.122915  884264 node_ready.go:38] duration metric: took 6.503802683s for node "ha-409851-m03" to be "Ready" ...
	I1120 21:41:03.122931  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:41:03.123035  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:41:03.138113  884264 api_server.go:72] duration metric: took 6.712035257s to wait for apiserver process to appear ...
	I1120 21:41:03.138137  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:41:03.138156  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:41:03.152932  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:41:03.154364  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:41:03.154387  884264 api_server.go:131] duration metric: took 16.242967ms to wait for apiserver health ...
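The node_ready.go and api_server.go entries above boil down to two checks: fetch the Node object until its NodeReady condition reports True, then probe the apiserver's /healthz endpoint until it answers 200 "ok". The sketch below shows both with client-go and net/http; the kubeconfig path, CA file path, host, node name, and poll intervals are illustrative assumptions, not the exact minikube implementation (pod_ready.go further down applies the same pattern to pod conditions).

package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node object until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{}); err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // "will retry", as in the log lines above
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

// waitHealthz probes <host>/healthz until it returns HTTP 200 with body "ok".
func waitHealthz(host, caFile string, timeout time.Duration) error {
	pool := x509.NewCertPool()
	if pem, err := os.ReadFile(caFile); err == nil {
		pool.AppendCertsFromPEM(pem)
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(host + "/healthz"); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", host, timeout)
}

func main() {
	// Kubeconfig path, node name, host and CA path are assumptions for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-409851-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	if err := waitHealthz("https://192.168.49.2:8443", os.ExpandEnv("$HOME/.minikube/ca.crt"), time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready and apiserver healthy")
}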
	I1120 21:41:03.154396  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:41:03.163795  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:41:03.163878  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.163902  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.163924  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.163958  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.163982  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.164004  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.164039  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.164064  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.164085  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.164120  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.164142  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.164160  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.164181  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.164218  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.164236  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.164257  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.164295  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.164319  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.164339  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.164374  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.164397  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.164414  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.164435  884264 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.164470  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.164490  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.164510  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.164542  884264 system_pods.go:74] duration metric: took 10.139581ms to wait for pod list to return data ...
	I1120 21:41:03.164569  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:41:03.171615  884264 default_sa.go:45] found service account: "default"
	I1120 21:41:03.171638  884264 default_sa.go:55] duration metric: took 7.047374ms for default service account to be created ...
	I1120 21:41:03.171648  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:41:03.265734  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:41:03.267572  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.267646  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.267710  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.267791  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.267818  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.267839  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.267876  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.267901  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.267953  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.267979  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.268035  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.268061  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.268111  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.268136  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.268187  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.268216  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.268276  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.268345  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.268371  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.268391  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.268432  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.268515  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.268541  884264 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.268560  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.269441  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.269511  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.269535  884264 system_pods.go:126] duration metric: took 97.879853ms to wait for k8s-apps to be running ...
	I1120 21:41:03.269960  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:03.270187  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:03.292101  884264 system_svc.go:56] duration metric: took 22.131508ms WaitForService to wait for kubelet
	I1120 21:41:03.292181  884264 kubeadm.go:587] duration metric: took 6.866108619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:41:03.292218  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:03.296374  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296410  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296423  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296428  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296434  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296439  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296443  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296447  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296452  884264 node_conditions.go:105] duration metric: took 4.198189ms to run NodePressure ...
	I1120 21:41:03.296468  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:03.296492  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:03.300140  884264 out.go:203] 
	I1120 21:41:03.304344  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:03.304532  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.307946  884264 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:41:03.311732  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:41:03.314710  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:41:03.317785  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:41:03.317884  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:41:03.318031  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:41:03.318080  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:41:03.317859  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:41:03.318453  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.344793  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:41:03.344812  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:41:03.344825  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:41:03.344848  884264 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:41:03.344898  884264 start.go:364] duration metric: took 35.644µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:41:03.344917  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:41:03.344922  884264 fix.go:54] fixHost starting: m04
	I1120 21:41:03.345209  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.376330  884264 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:41:03.376356  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:41:03.379471  884264 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:41:03.379560  884264 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:41:03.742042  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.769660  884264 kic.go:430] container "ha-409851-m04" state is running.
	I1120 21:41:03.770657  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:03.796776  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.797038  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:41:03.797104  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:03.823466  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:03.823770  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:03.823778  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:41:03.824435  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:41:06.970676  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:06.970701  884264 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:41:06.970765  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:06.990700  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:06.991183  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:06.991203  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:41:07.146851  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:07.146933  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:07.166460  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:07.166767  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:07.166788  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:41:07.311657  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:41:07.311684  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:41:07.311699  884264 ubuntu.go:190] setting up certificates
	I1120 21:41:07.311712  884264 provision.go:84] configureAuth start
	I1120 21:41:07.311786  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:07.331035  884264 provision.go:143] copyHostCerts
	I1120 21:41:07.331091  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331124  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:41:07.331136  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331213  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:41:07.331298  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331322  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:41:07.331326  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331352  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:41:07.331393  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331415  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:41:07.331422  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331447  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:41:07.331497  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:41:08.623164  884264 provision.go:177] copyRemoteCerts
	I1120 21:41:08.623237  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:41:08.623286  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.639718  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:08.747935  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:41:08.748002  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:41:08.773774  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:41:08.773840  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:41:08.801882  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:41:08.801944  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:41:08.828179  884264 provision.go:87] duration metric: took 1.516452919s to configureAuth
	I1120 21:41:08.828204  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:41:08.828439  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:08.828555  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.849615  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:08.849931  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:08.849949  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:41:09.190143  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:41:09.190166  884264 machine.go:97] duration metric: took 5.39311756s to provisionDockerMachine
	I1120 21:41:09.190177  884264 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:41:09.190190  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:41:09.190252  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:41:09.190297  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.211823  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.319209  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:41:09.323014  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:41:09.323048  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:41:09.323086  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:41:09.323159  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:41:09.323239  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:41:09.323252  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:41:09.323406  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:41:09.331751  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:09.350101  884264 start.go:296] duration metric: took 159.908044ms for postStartSetup
	I1120 21:41:09.350192  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:41:09.350244  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.368495  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.469917  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:41:09.475514  884264 fix.go:56] duration metric: took 6.130583533s for fixHost
	I1120 21:41:09.475537  884264 start.go:83] releasing machines lock for "ha-409851-m04", held for 6.130630836s
	I1120 21:41:09.475607  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:09.501255  884264 out.go:179] * Found network options:
	I1120 21:41:09.504338  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 21:41:09.507242  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507285  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507296  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507328  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507344  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507354  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:41:09.507446  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:41:09.507499  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.507798  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:41:09.507867  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.541478  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.545988  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.688666  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:41:09.768175  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:41:09.768304  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:41:09.777453  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:41:09.777480  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:41:09.777528  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:41:09.777603  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:41:09.798578  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:41:09.812578  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:41:09.812674  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:41:09.835768  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:41:09.850693  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:41:10.028876  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:41:10.166862  884264 docker.go:234] disabling docker service ...
	I1120 21:41:10.166933  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:41:10.183999  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:41:10.199107  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:41:10.347931  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:41:10.487321  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:41:10.501617  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:41:10.518198  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:41:10.518277  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.527726  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:41:10.527803  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.539453  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.549501  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.558643  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:41:10.568755  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.581525  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.591524  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.602370  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:41:10.613570  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:41:10.624948  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:10.769380  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
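Taken together, the sed edits a few lines above amount to a cri-o drop-in roughly like the fragment below, after which crio is restarted. This is a sketch assembled only from those commands; the TOML section headers and anything else in /etc/crio/crio.conf.d/02-crio.conf are assumptions, not a dump of the real file.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]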
	I1120 21:41:10.965596  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:41:10.965735  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:41:10.970207  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:41:10.970330  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:41:10.974315  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:41:11.000434  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:41:11.000593  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:41:11.038585  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:41:11.076706  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:41:11.079567  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:41:11.082644  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:41:11.085633  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 21:41:11.088629  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:41:11.108683  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:41:11.114419  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.127176  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:41:11.127431  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.127709  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:41:11.147050  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:41:11.147378  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:41:11.147394  884264 certs.go:195] generating shared ca certs ...
	I1120 21:41:11.147409  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:41:11.147533  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:41:11.147578  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:41:11.147592  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:41:11.147607  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:41:11.147660  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:41:11.147683  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:41:11.147743  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:41:11.147786  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:41:11.147795  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:41:11.147820  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:41:11.147843  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:41:11.147871  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:41:11.147915  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:11.147959  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.147976  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.147989  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.148010  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:41:11.176245  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:41:11.195856  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:41:11.214613  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:41:11.238690  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:41:11.260518  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:41:11.281726  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:41:11.301862  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:41:11.308424  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.316198  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:41:11.324601  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330531  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330646  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.373994  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:41:11.382317  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.390537  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:41:11.399975  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404118  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404234  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.448070  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:41:11.457954  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.471564  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:41:11.480744  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486391  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.534970  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:41:11.543238  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:41:11.547092  884264 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:41:11.547139  884264 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:41:11.547290  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:41:11.547367  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:41:11.555116  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:41:11.555189  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:41:11.563262  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:41:11.578268  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:41:11.593301  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:41:11.598486  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.609343  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.746115  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.760921  884264 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:41:11.761346  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.764709  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:41:11.767650  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.914567  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.938460  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:41:11.938535  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:41:11.938816  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	W1120 21:41:13.945651  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	W1120 21:41:16.442900  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	I1120 21:41:17.943857  884264 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:41:17.943887  884264 node_ready.go:38] duration metric: took 6.005051124s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:41:17.943901  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:17.943959  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:17.956954  884264 system_svc.go:56] duration metric: took 13.044338ms WaitForService to wait for kubelet
	I1120 21:41:17.956985  884264 kubeadm.go:587] duration metric: took 6.196020803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:41:17.957003  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:17.961298  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961332  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961343  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961348  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961353  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961357  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961361  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961364  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961369  884264 node_conditions.go:105] duration metric: took 4.361006ms to run NodePressure ...
	I1120 21:41:17.961388  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:17.961412  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:17.961738  884264 ssh_runner.go:195] Run: rm -f paused
	I1120 21:41:17.965714  884264 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:41:17.966209  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:41:17.987930  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994206  884264 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:41:17.994237  884264 pod_ready.go:86] duration metric: took 6.274933ms for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994247  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.000165  884264 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:41:18.000193  884264 pod_ready.go:86] duration metric: took 5.93943ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.004504  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012659  884264 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:41:18.012689  884264 pod_ready.go:86] duration metric: took 8.149311ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012700  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020780  884264 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:41:18.020813  884264 pod_ready.go:86] duration metric: took 8.102492ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020824  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.167216  884264 request.go:683] "Waited before sending request" delay="146.304273ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-409851-m03"
	I1120 21:41:18.366937  884264 request.go:683] "Waited before sending request" delay="196.339897ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:18.767349  884264 request.go:683] "Waited before sending request" delay="195.31892ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:19.167191  884264 request.go:683] "Waited before sending request" delay="142.259307ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:20.032402  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:22.528455  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:41:25.033882  884264 pod_ready.go:94] pod "etcd-ha-409851-m03" is "Ready"
	I1120 21:41:25.033912  884264 pod_ready.go:86] duration metric: took 7.013080383s for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.040254  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053388  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:41:25.053485  884264 pod_ready.go:86] duration metric: took 13.116035ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053512  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166598  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:41:25.166678  884264 pod_ready.go:86] duration metric: took 113.122413ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166704  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.367416  884264 request.go:683] "Waited before sending request" delay="167.284948ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:25.394798  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m03" is "Ready"
	I1120 21:41:25.394876  884264 pod_ready.go:86] duration metric: took 228.152279ms for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.567359  884264 request.go:683] "Waited before sending request" delay="172.329236ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:41:25.572178  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.768229  884264 request.go:683] "Waited before sending request" delay="195.205343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:41:25.966769  884264 request.go:683] "Waited before sending request" delay="194.270004ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:41:25.970209  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:41:25.970236  884264 pod_ready.go:86] duration metric: took 398.02564ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.970246  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.166647  884264 request.go:683] "Waited before sending request" delay="196.282354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:41:26.367492  884264 request.go:683] "Waited before sending request" delay="194.321944ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:41:26.370972  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:41:26.371028  884264 pod_ready.go:86] duration metric: took 400.775984ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.371038  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.567360  884264 request.go:683] "Waited before sending request" delay="196.215941ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:26.766668  884264 request.go:683] "Waited before sending request" delay="195.346826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:26.966667  884264 request.go:683] "Waited before sending request" delay="95.147149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:27.167326  884264 request.go:683] "Waited before sending request" delay="196.326498ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.568613  884264 request.go:683] "Waited before sending request" delay="192.229084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.966849  884264 request.go:683] "Waited before sending request" delay="91.23035ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:28.378730  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:30.379114  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:32.879033  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:35.379045  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:37.878241  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:40.378797  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:42.878559  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:45.379157  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:47.877869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:49.881128  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:52.378869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:54.878402  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:56.879168  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:59.386440  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:01.877608  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:04.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:06.379677  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:08.385036  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:10.879345  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:13.378081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:15.378210  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:17.878956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:20.379087  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:22.392566  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:24.878081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:26.878436  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:29.390304  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:31.877421  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:33.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:35.878348  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:38.378256  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:40.378547  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:42.878117  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:44.878306  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:47.378856  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:49.379096  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:51.877443  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:53.877489  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:55.878600  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:57.878767  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:00.379377  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:02.878543  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:04.879548  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:07.377207  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:09.377567  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:11.379602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:13.380062  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:15.878005  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:17.879034  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:20.380298  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:22.877944  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:24.878873  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:27.379047  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:29.380796  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:31.882322  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:34.378874  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:36.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:38.379341  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:40.379731  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:42.877518  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:44.878086  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:46.878385  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:49.377786  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:51.378044  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:53.378300  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:55.878538  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:57.878669  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:59.882674  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:02.378956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:04.879155  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:07.378530  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:09.878139  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:11.879593  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:14.377334  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:16.378277  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:18.381420  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:20.878229  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:22.878418  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:24.879069  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:27.377824  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:29.878048  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:31.878313  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:34.379581  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:36.877137  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:38.878394  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:40.878828  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:43.378176  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:45.878068  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:47.878425  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:49.878602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:52.378582  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:54.878764  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:57.378027  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:59.381427  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:01.885697  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:04.378368  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:06.378472  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:08.389992  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:10.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:13.377529  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:15.378711  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:17.877998  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:45:17.966316  884264 pod_ready.go:86] duration metric: took 3m51.595241121s for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:45:17.966353  884264 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:45:17.966368  884264 pod_ready.go:40] duration metric: took 4m0.000621775s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:45:17.969588  884264 out.go:203] 
	W1120 21:45:17.972643  884264 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:45:17.975633  884264 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-409851 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node list --alsologtostderr -v 5
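Note: the stderr above shows pod_ready.go polling the kube-system pods through the HA VIP (https://192.168.49.254:8443) until its 4m0s wait budget expires on kube-controller-manager-ha-409851-m03. A rough hand-run equivalent of that readiness check (assuming a kubeconfig already pointing at this cluster; this is not the test's own code) would be:

	# Wait up to 4 minutes for the pod the test got stuck on, mirroring pod_ready.go's budget
	kubectl --namespace kube-system wait --for=condition=Ready \
	  pod/kube-controller-manager-ha-409851-m03 --timeout=4m
	# Or read the Ready condition directly instead of waiting
	kubectl --namespace kube-system get pod kube-controller-manager-ha-409851-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'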
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-409851
helpers_test.go:243: (dbg) docker inspect ha-409851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	        "Created": "2025-11-20T21:32:05.722530265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 884396,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:38:31.055844346Z",
	            "FinishedAt": "2025-11-20T21:38:30.436661317Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hosts",
	        "LogPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853-json.log",
	        "Name": "/ha-409851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-409851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-409851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	                "LowerDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-409851",
	                "Source": "/var/lib/docker/volumes/ha-409851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-409851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-409851",
	                "name.minikube.sigs.k8s.io": "ha-409851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8599a98b0ccff252f0c8c9aad9b46a3b9148a590bf903962ae9e74255b1d7bab",
	            "SandboxKey": "/var/run/docker/netns/8599a98b0ccf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33917"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-409851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b7:48:6c:96:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad232b357b1bc65babf7a48f3581b00686ef0ccc0f86acee1a57f8a071f682f1",
	                    "EndpointID": "4581080836f9e1d498ecfc4ffb90702bf2c1e0bf832ae79ac8d4da9d8f193945",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-409851",
	                        "d20916d298c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
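The NetworkSettings block above records the host port bindings for the KIC node container (22/tcp mapped to 127.0.0.1:33917) and its IP on the ha-409851 network (192.168.49.2). The same values can be pulled out with the Go templates minikube itself runs later in this log; for example (a manual sketch, assuming the ssh key path recorded further down in the log):

	# Host port mapped to the node's SSH port (22/tcp) -> 33917 above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-409851
	# Node IP on the ha-409851 network -> 192.168.49.2 above
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-409851
	# SSH into the node via the mapped port, using the machine key shown in the provisioning log
	ssh -i /home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa \
	  -p 33917 docker@127.0.0.1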
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 logs -n 25: (1.822676782s)
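Before the full dump below, note that the same post-mortem data can be re-collected by hand with the commands the harness just ran:

	out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
	out/minikube-linux-arm64 -p ha-409851 logs -n 25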
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m03_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt                                                            │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt                                                │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node start m02 --alsologtostderr -v 5                                                                                     │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │                     │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:38 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5                                                                                  │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:38 UTC │                     │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:38:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:38:30.769876  884264 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:38:30.770088  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770114  884264 out.go:374] Setting ErrFile to fd 2...
	I1120 21:38:30.770133  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770657  884264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:38:30.771309  884264 out.go:368] Setting JSON to false
	I1120 21:38:30.772185  884264 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15636,"bootTime":1763659075,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:38:30.772284  884264 start.go:143] virtualization:  
	I1120 21:38:30.775797  884264 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:38:30.779473  884264 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:38:30.779630  884264 notify.go:221] Checking for updates...
	I1120 21:38:30.785039  884264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:38:30.787825  884264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:30.790672  884264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:38:30.793534  884264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:38:30.796313  884264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:38:30.799725  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:30.799830  884264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:38:30.836806  884264 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:38:30.836950  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.901769  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.892669658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.901887  884264 docker.go:319] overlay module found
	I1120 21:38:30.904943  884264 out.go:179] * Using the docker driver based on existing profile
	I1120 21:38:30.907794  884264 start.go:309] selected driver: docker
	I1120 21:38:30.907812  884264 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.907982  884264 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:38:30.908085  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.967881  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.95851914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.968308  884264 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:38:30.968343  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:30.968403  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:30.968455  884264 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.971749  884264 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:38:30.974680  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:30.977600  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:30.980407  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:30.980458  884264 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:38:30.980472  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:30.980485  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:30.980567  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:30.980578  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:30.980718  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:30.999616  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:30.999641  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:30.999654  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:30.999678  884264 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:30.999743  884264 start.go:364] duration metric: took 37.309µs to acquireMachinesLock for "ha-409851"
	I1120 21:38:30.999781  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:30.999790  884264 fix.go:54] fixHost starting: 
	I1120 21:38:31.000072  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.018393  884264 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:38:31.018439  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:31.021858  884264 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:38:31.021974  884264 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:38:31.304211  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.327776  884264 kic.go:430] container "ha-409851" state is running.
	I1120 21:38:31.328187  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:31.353945  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:31.354443  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:31.354512  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:31.382173  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:31.382524  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:31.382534  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:31.383289  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:38:34.531685  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.531763  884264 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:38:34.531863  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.551282  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.551609  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.551626  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:38:34.704765  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.704852  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.723366  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.723694  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.723717  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:34.867982  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:34.868025  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:34.868088  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:34.868104  884264 provision.go:84] configureAuth start
	I1120 21:38:34.868188  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:34.887153  884264 provision.go:143] copyHostCerts
	I1120 21:38:34.887208  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887270  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:34.887291  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887383  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:34.887509  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887538  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:34.887549  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887584  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:34.887659  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887686  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:34.887694  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887724  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:34.887782  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:38:35.400008  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:35.400088  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:35.400141  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.418360  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:35.518831  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:35.518950  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:35.537804  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:35.537900  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:38:35.556580  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:35.556644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:35.575458  884264 provision.go:87] duration metric: took 707.296985ms to configureAuth
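
The configureAuth step above copies the host CA material into the profile and signs a per-node server certificate with the SANs listed in the "generating server cert" line, then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the node. A minimal sketch of that signing step with Go's crypto/x509 follows; it is illustrative only, not minikube's own code, and the key size, lifetime and throwaway CA are assumptions (errors are deliberately ignored for brevity).

    // sketch: sign a node server certificate with a CA, embedding the SANs
    // seen in the log (127.0.0.1 192.168.49.2 ha-409851 localhost minikube).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// In the real flow the CA key pair already exists as certs/ca.pem and
    	// ca-key.pem; a throwaway CA keeps this sketch self-contained.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate for the node, signed by the CA, with the logged SANs.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-409851"}, CommonName: "ha-409851"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-409851", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

    	// PEM output corresponds to the server.pem / server-key.pem copied to /etc/docker.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
    }
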
	I1120 21:38:35.575487  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:35.575723  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:35.575844  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.594086  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:35.594409  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:35.594430  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:35.962817  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:35.962837  884264 machine.go:97] duration metric: took 4.608380541s to provisionDockerMachine
	I1120 21:38:35.962848  884264 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:38:35.962859  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:35.962920  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:35.962989  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.984847  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.091216  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:36.094852  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:36.094880  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:36.094891  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:36.094947  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:36.095090  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:36.095099  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:36.095212  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:36.102846  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:36.120698  884264 start.go:296] duration metric: took 157.834355ms for postStartSetup
	I1120 21:38:36.120824  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:36.120914  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.138055  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.236342  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:36.241086  884264 fix.go:56] duration metric: took 5.241287155s for fixHost
	I1120 21:38:36.241113  884264 start.go:83] releasing machines lock for "ha-409851", held for 5.241354183s
	I1120 21:38:36.241193  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:36.259831  884264 ssh_runner.go:195] Run: cat /version.json
	I1120 21:38:36.259893  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.260152  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:36.260229  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.287560  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.292613  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.386937  884264 ssh_runner.go:195] Run: systemctl --version
	I1120 21:38:36.496830  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:36.537327  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:36.541923  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:36.542024  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:36.549865  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:36.549933  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:36.549983  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:36.550070  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:36.565179  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:36.578552  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:36.578675  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:36.594881  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:36.608683  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:36.731342  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:36.868669  884264 docker.go:234] disabling docker service ...
	I1120 21:38:36.868857  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:36.886109  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:36.900226  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:37.014736  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:37.144034  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:37.158890  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:37.173954  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:37.174053  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.183273  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:37.183345  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.192471  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.201342  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.210418  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:37.218694  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.227957  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.236515  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
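
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs with conmon in the pod cgroup, and ensure a default_sysctls list exists that sets net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of those line rewrites is sketched below; the sample input is hypothetical, since the real edits run sed over SSH against whatever the kicbase image ships.

    // sketch: the same line-oriented rewrites applied to a sample 02-crio.conf.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
    		"cgroup_manager = \"systemd\"\n" +
    		"conmon_cgroup = \"system.slice\"\n"

    	// Pin the pause image used for pod sandboxes.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Drop any existing conmon_cgroup line, then switch to cgroupfs with
    	// conmon placed in the pod cgroup.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	// Ensure a default_sysctls list exists and allows unprivileged low ports.
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	fmt.Print(conf)
    }
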
	I1120 21:38:37.245491  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:37.253272  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:38:37.260653  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:37.378780  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:38:37.568343  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:37.568517  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:37.572886  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:37.572998  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:37.576787  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:37.603768  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:38:37.603878  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.634707  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.668026  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:37.670996  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:37.688086  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:37.692097  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.702318  884264 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:38:37.702473  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:37.702533  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.738810  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.738882  884264 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:38:37.739011  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.764274  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.764295  884264 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:38:37.764305  884264 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:38:37.764401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:38:37.764481  884264 ssh_runner.go:195] Run: crio config
	I1120 21:38:37.825630  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:37.825661  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:37.825685  884264 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:38:37.825743  884264 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:38:37.825905  884264 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
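
The kubeadm, kubelet and kube-proxy manifest above is rendered from the kubeadm options recorded a few lines earlier and later written to /var/tmp/minikube/kubeadm.yaml.new. A small text/template sketch of that kind of rendering is shown below; the template and struct are illustrative, not minikube's actual ones.

    // sketch: render a fragment of the kubeadm InitConfiguration from a struct,
    // in the style of the generated manifest above.
    package main

    import (
    	"os"
    	"text/template"
    )

    type nodeOpts struct {
    	Name      string
    	NodeIP    string
    	CRISocket string
    	BindPort  int
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
      taints: []
    `

    func main() {
    	opts := nodeOpts{
    		Name:      "ha-409851",
    		NodeIP:    "192.168.49.2",
    		CRISocket: "unix:///var/run/crio/crio.sock",
    		BindPort:  8443,
    	}
    	tmpl := template.Must(template.New("init").Parse(initTmpl))
    	if err := tmpl.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }
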
	
	I1120 21:38:37.825931  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:37.825986  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:37.839066  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:37.839175  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
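
The generated kube-vip static pod above runs with leader election (vip_leaderelection=true) against the plndr-cp-lock lease in kube-system and, since the ip_vs modules were not found, manages the 192.168.49.254 VIP via ARP only. A short client-go sketch for checking which control-plane node currently holds that lease follows; the kubeconfig path and a reachable cluster are assumptions.

    // sketch: read the kube-vip leader-election lease created by the manifest above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes the kubeconfig written by this run; any valid kubeconfig works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-834992/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if lease.Spec.HolderIdentity != nil {
    		fmt.Println("kube-vip leader:", *lease.Spec.HolderIdentity)
    	}
    }
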
	I1120 21:38:37.839248  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:37.847133  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:37.847235  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:38:37.855412  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:38:37.868477  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:37.881823  884264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:38:37.895195  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:38:37.908845  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:37.912943  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.923133  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:38.049716  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:38.067155  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:38:38.067178  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:38.067197  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.067386  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:38.067464  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:38.067494  884264 certs.go:257] generating profile certs ...
	I1120 21:38:38.067639  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:38.067683  884264 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56
	I1120 21:38:38.067722  884264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 21:38:38.134399  884264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 ...
	I1120 21:38:38.134432  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56: {Name:mk7acbd3c6c1dd357ee45d74f751ed3339a8f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134668  884264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 ...
	I1120 21:38:38.134693  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56: {Name:mkd0412497c04b2292f00ce455371fa1840c4bc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134834  884264 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:38:38.135032  884264 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:38:38.135229  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:38.135248  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:38.135280  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:38.135304  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:38.135321  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:38.135350  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:38.135384  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:38.135407  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:38.135423  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:38.135493  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:38.135556  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:38.135571  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:38.135614  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:38.135660  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:38.135691  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:38.135764  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:38.135818  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.135841  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.135858  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.136478  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:38.161386  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:38.183426  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:38.209571  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:38.230449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:38.269189  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:38.290285  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:38.310366  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:38.336702  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:38.356298  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:38.377772  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:38.397354  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:38:38.410774  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:38.417590  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.426055  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:38.435256  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442057  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442128  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.484356  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:38.492206  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.499992  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:38.507965  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512359  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512476  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.554117  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:38:38.562052  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.569885  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:38.578289  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582380  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582505  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.624140  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:38.633756  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:38.637748  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:38.679477  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:38.725454  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:38.767445  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:38.816551  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:38.874060  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
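
The openssl x509 -checkend 86400 runs above verify that each existing control-plane certificate will still be valid 24 hours from now before the restart reuses it. The equivalent check in Go's crypto/x509 is sketched below; it reads the same on-node path, so it would have to run on the node itself.

    // sketch: the openssl -checkend 86400 equivalent for a PEM-encoded certificate.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Like -checkend 86400: report whether the cert expires within the next 24h.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    	} else {
    		fmt.Println("certificate still valid for at least 24h")
    	}
    }
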
	I1120 21:38:38.945404  884264 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:38.945592  884264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:38:38.945702  884264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:38:39.035653  884264 cri.go:89] found id: "5c78de3db456c35c2eafd8be0e59c965664f006cb3e9b19c4d9b05b81ab079fc"
	I1120 21:38:39.035728  884264 cri.go:89] found id: "be96e9e3ffb4708dccf24988f485136e1039f591a2e9c93edef5d830431fa080"
	I1120 21:38:39.035748  884264 cri.go:89] found id: "b40d2cfd438a8dc3a5f89de00510928701b9ef1887f2f4f9055a3978ea2197fa"
	I1120 21:38:39.035769  884264 cri.go:89] found id: "696b700dcb568291344392af5fbbff9e59bb78b02bbbf2fa18e2156bab42fae1"
	I1120 21:38:39.035804  884264 cri.go:89] found id: "bbe2aa5c20be55307484a6dc5e0cf27f1adb8b5e2bad7448657394d0153a3e84"
	I1120 21:38:39.035846  884264 cri.go:89] found id: ""
	I1120 21:38:39.035929  884264 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:38:39.060419  884264 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:38:39Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:38:39.060556  884264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:38:39.074901  884264 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:38:39.074968  884264 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:38:39.075123  884264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:38:39.088673  884264 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:39.089259  884264 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.089441  884264 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:38:39.089845  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.090518  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:38:39.091335  884264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:38:39.091424  884264 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:38:39.091402  884264 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:38:39.091527  884264 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:38:39.091559  884264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:38:39.091579  884264 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:38:39.091949  884264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:38:39.104395  884264 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:38:39.104468  884264 kubeadm.go:602] duration metric: took 29.411064ms to restartPrimaryControlPlane
	I1120 21:38:39.104495  884264 kubeadm.go:403] duration metric: took 159.115003ms to StartCluster
	I1120 21:38:39.104539  884264 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.104635  884264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.105401  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.105666  884264 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:39.105723  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:38:39.105753  884264 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:38:39.106516  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.111744  884264 out.go:179] * Enabled addons: 
	I1120 21:38:39.114735  884264 addons.go:515] duration metric: took 8.971082ms for enable addons: enabled=[]
	I1120 21:38:39.114834  884264 start.go:247] waiting for cluster config update ...
	I1120 21:38:39.114858  884264 start.go:256] writing updated cluster config ...
	I1120 21:38:39.118409  884264 out.go:203] 
	I1120 21:38:39.121722  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.121897  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.125210  884264 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:38:39.128166  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:39.131274  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:39.134220  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:39.134243  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:39.134349  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:39.134358  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:39.134481  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.134707  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:39.163368  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:39.163387  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:39.163399  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:39.163424  884264 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:39.163475  884264 start.go:364] duration metric: took 37.473µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:38:39.163495  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:39.163500  884264 fix.go:54] fixHost starting: m02
	I1120 21:38:39.163761  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.188597  884264 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:38:39.188621  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:39.197319  884264 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:38:39.197414  884264 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:38:39.580228  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.619726  884264 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:38:39.620289  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:39.645172  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.645452  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:39.645526  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:39.670151  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:39.670895  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:39.670954  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:39.671692  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44478->127.0.0.1:33922: read: connection reset by peer
	I1120 21:38:42.978516  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
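
The first dial to port 33922 at 21:38:39.671 failed with a connection reset because the just-restarted container was not yet accepting SSH, and the client apparently retried until the hostname command succeeded about three seconds later. A minimal sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh is shown below; the retry count and sleep interval are assumptions, while the port, key path and user come from the log.

    // sketch: dial the node's forwarded SSH port with retries, then run "hostname".
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
    		Timeout:         5 * time.Second,
    	}
    	var client *ssh.Client
    	for attempt := 1; attempt <= 10; attempt++ {
    		client, err = ssh.Dial("tcp", "127.0.0.1:33922", cfg)
    		if err == nil {
    			break
    		}
    		fmt.Println("dial failed, retrying:", err)
    		time.Sleep(time.Second)
    	}
    	if client == nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }
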
	
	I1120 21:38:42.978591  884264 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:38:42.978693  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.005096  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.005433  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.005447  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:38:43.320783  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:38:43.320866  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.374875  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.375237  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.375260  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:43.620767  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:43.620794  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:43.620810  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:43.620821  884264 provision.go:84] configureAuth start
	I1120 21:38:43.620881  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:43.659411  884264 provision.go:143] copyHostCerts
	I1120 21:38:43.659453  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659485  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:43.659493  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659567  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:43.659644  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659661  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:43.659665  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659690  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:43.659728  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659743  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:43.659747  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659768  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:43.659814  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:38:44.333480  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:44.333555  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:44.333605  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.352064  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:44.461767  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:44.461834  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:38:44.500018  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:44.500084  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:44.547484  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:44.547557  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:44.596357  884264 provision.go:87] duration metric: took 975.522241ms to configureAuth
	I1120 21:38:44.596401  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:44.596654  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:44.596788  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.624344  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:44.624651  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:44.624670  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:45.322074  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:45.322113  884264 machine.go:97] duration metric: took 5.676650753s to provisionDockerMachine
	I1120 21:38:45.322128  884264 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:38:45.322141  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:45.322226  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:45.322277  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.342731  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.453499  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:45.470888  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:45.470938  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:45.470950  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:45.471014  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:45.471096  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:45.471109  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:45.471214  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:45.489273  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:45.556457  884264 start.go:296] duration metric: took 234.311564ms for postStartSetup
	I1120 21:38:45.556611  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:45.556676  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.587707  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.729685  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:45.740986  884264 fix.go:56] duration metric: took 6.577477813s for fixHost
	I1120 21:38:45.741008  884264 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.577525026s
	I1120 21:38:45.741083  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:45.771820  884264 out.go:179] * Found network options:
	I1120 21:38:45.774905  884264 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:38:45.777764  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:38:45.777810  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:38:45.777890  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:45.777942  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.778213  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:45.778264  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.814965  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.816280  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:46.130838  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:46.136697  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:46.136780  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:46.154525  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:46.154562  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:46.154596  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:46.154657  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:46.179167  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:46.198207  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:46.198285  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:46.220547  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:46.238372  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:46.474214  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:46.692069  884264 docker.go:234] disabling docker service ...
	I1120 21:38:46.692151  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:46.711611  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:46.733293  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:46.937783  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:47.161295  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:47.177649  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:47.196405  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:47.196499  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.211080  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:47.211159  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.226280  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.241556  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.251537  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:47.263194  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.279048  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.292565  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.305383  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:47.318266  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:38:47.330851  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:47.572162  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:38:47.826907  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:47.827027  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:47.830650  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:47.830757  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:47.834471  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:47.858658  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:38:47.858770  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.887568  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.924184  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:47.927160  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:38:47.930191  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:47.947316  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:47.951294  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:47.961645  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:38:47.961891  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:47.962176  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:47.978704  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:38:47.979070  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:38:47.979083  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:47.979100  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:47.979221  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:47.979265  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:47.979275  884264 certs.go:257] generating profile certs ...
	I1120 21:38:47.979366  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:47.979435  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.36974727
	I1120 21:38:47.979478  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:47.979491  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:47.979505  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:47.979525  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:47.979536  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:47.979550  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:47.979561  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:47.979576  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:47.979587  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:47.979641  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:47.979672  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:47.979689  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:47.979713  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:47.979738  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:47.979762  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:47.979804  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:47.979840  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:47.979855  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:47.979869  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:47.979929  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:47.996700  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:48.095431  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:38:48.099410  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:38:48.107940  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:38:48.111757  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:38:48.120021  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:38:48.123592  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:38:48.132027  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:38:48.135667  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:38:48.143707  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:38:48.147064  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:38:48.155777  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:38:48.159326  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:38:48.168074  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:48.187052  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:48.204261  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:48.222484  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:48.239999  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:48.257750  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:48.275489  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:48.293203  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:48.310644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:48.333442  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:48.353223  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:48.371976  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:38:48.384868  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:38:48.397625  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:38:48.410587  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:38:48.423732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:38:48.437291  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:38:48.449732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:38:48.462200  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:48.468726  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.476219  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:48.483790  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.487957  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.488071  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.529603  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:48.541715  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.551230  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:48.560557  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566086  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566214  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.614556  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:48.622341  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.630607  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:48.638692  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642390  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.683660  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:38:48.692961  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:48.697105  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:48.738157  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:48.779134  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:48.820771  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:48.861964  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:48.903079  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:38:48.946240  884264 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:38:48.946401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:38:48.946432  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:48.946494  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:48.959247  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:48.959318  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:38:48.959400  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:48.967383  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:48.967482  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:38:48.975230  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:38:48.988715  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:49.001843  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:38:49.019090  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:49.023118  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:49.034137  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.154884  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.169065  884264 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:49.169534  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:49.173571  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:38:49.176570  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.315404  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.329975  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:38:49.330049  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:38:49.330298  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	W1120 21:38:59.331759  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout
	I1120 21:39:02.652543  884264 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:39:12.654218  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:48284->192.168.49.2:8443: read: connection reset by peer
	I1120 21:39:13.752634  884264 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:39:13.752662  884264 node_ready.go:38] duration metric: took 24.422335125s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:39:13.752675  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:39:13.752734  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:39:13.802621  884264 api_server.go:72] duration metric: took 24.633509474s to wait for apiserver process to appear ...
	I1120 21:39:13.802644  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:39:13.802666  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:13.846540  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:39:13.846565  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:39:14.303057  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.317076  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.317121  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:14.803756  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.835165  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.835252  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.302766  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.327917  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.327996  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.802846  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.844402  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.844486  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:16.302774  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:16.349139  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:39:16.355368  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:39:16.355451  884264 api_server.go:131] duration metric: took 2.552797549s to wait for apiserver health ...
	I1120 21:39:16.355475  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:39:16.388991  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:39:16.389076  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.389097  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.389116  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.389161  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.389188  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.389225  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.389248  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.389268  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.389303  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.389327  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.389347  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.389382  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.389405  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.389426  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.389462  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.389484  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.389501  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.389582  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.389609  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.389631  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.389670  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.389696  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.389718  884264 system_pods.go:61] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.389753  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.389791  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.389812  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.389848  884264 system_pods.go:74] duration metric: took 34.353977ms to wait for pod list to return data ...
	I1120 21:39:16.389871  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:39:16.416752  884264 default_sa.go:45] found service account: "default"
	I1120 21:39:16.416829  884264 default_sa.go:55] duration metric: took 26.934653ms for default service account to be created ...
	I1120 21:39:16.416854  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:39:16.495655  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:39:16.495738  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.495762  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.495799  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.495829  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.495850  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.495891  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.495919  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.495943  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.495976  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.496003  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.496027  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.496065  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.496119  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.496154  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.496175  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.496206  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.496230  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.496253  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.496290  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.496316  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.496339  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.496376  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.496404  884264 system_pods.go:89] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.496424  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.496462  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.496488  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.496514  884264 system_pods.go:126] duration metric: took 79.640825ms to wait for k8s-apps to be running ...
	I1120 21:39:16.496549  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:39:16.496649  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:39:16.525131  884264 system_svc.go:56] duration metric: took 28.572383ms WaitForService to wait for kubelet
	I1120 21:39:16.525221  884264 kubeadm.go:587] duration metric: took 27.356113948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:39:16.525256  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:39:16.547500  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547592  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547622  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547645  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547686  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547706  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547727  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547760  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547787  884264 node_conditions.go:105] duration metric: took 22.508874ms to run NodePressure ...
	I1120 21:39:16.547814  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:39:16.547869  884264 start.go:256] writing updated cluster config ...
	I1120 21:39:16.551433  884264 out.go:203] 
	I1120 21:39:16.554880  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:16.555111  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.558694  884264 out.go:179] * Starting "ha-409851-m03" control-plane node in "ha-409851" cluster
	I1120 21:39:16.562364  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:39:16.565426  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:39:16.568528  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:39:16.568640  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:39:16.568611  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:39:16.568996  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:39:16.569028  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:39:16.569191  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.590195  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:39:16.590214  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:39:16.590225  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:39:16.590248  884264 start.go:360] acquireMachinesLock for ha-409851-m03: {Name:mkdc61c72ab6a67582f9ee213a06b683b619e587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:39:16.590297  884264 start.go:364] duration metric: took 34.011µs to acquireMachinesLock for "ha-409851-m03"
	I1120 21:39:16.590316  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:39:16.590321  884264 fix.go:54] fixHost starting: m03
	I1120 21:39:16.590574  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:16.615086  884264 fix.go:112] recreateIfNeeded on ha-409851-m03: state=Stopped err=<nil>
	W1120 21:39:16.615115  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:39:16.618135  884264 out.go:252] * Restarting existing docker container for "ha-409851-m03" ...
	I1120 21:39:16.618225  884264 cli_runner.go:164] Run: docker start ha-409851-m03
	I1120 21:39:16.978914  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:17.006894  884264 kic.go:430] container "ha-409851-m03" state is running.
	I1120 21:39:17.007317  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:17.038413  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:17.038674  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:39:17.038742  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:17.068281  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:17.068584  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:17.068592  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:39:17.070869  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:39:20.309993  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.310063  884264 ubuntu.go:182] provisioning hostname "ha-409851-m03"
	I1120 21:39:20.310163  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.336716  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.337029  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.337043  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m03 && echo "ha-409851-m03" | sudo tee /etc/hostname
	I1120 21:39:20.816264  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.816432  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.846177  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.846510  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.846531  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:39:21.112630  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
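	(The hostname provisioning above runs each command over SSH against the forwarded port 127.0.0.1:33927. A minimal Go sketch of the same pattern, assuming key-based auth with the machine's id_rsa key as shown in the log; this is an illustration only, not minikube's libmachine implementation:)

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path taken from the log above; adjust for your own environment.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33927", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Run the same probe the provisioner starts with: `hostname`.
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}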
	I1120 21:39:21.112715  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:39:21.112747  884264 ubuntu.go:190] setting up certificates
	I1120 21:39:21.112788  884264 provision.go:84] configureAuth start
	I1120 21:39:21.112872  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:21.141385  884264 provision.go:143] copyHostCerts
	I1120 21:39:21.141425  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141458  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:39:21.141465  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141537  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:39:21.141610  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141626  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:39:21.141631  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141657  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:39:21.141696  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141713  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:39:21.141717  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141739  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:39:21.141793  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m03 san=[127.0.0.1 192.168.49.4 ha-409851-m03 localhost minikube]
	I1120 21:39:21.285547  884264 provision.go:177] copyRemoteCerts
	I1120 21:39:21.285659  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:39:21.285756  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.304352  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:21.419419  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:39:21.419479  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:39:21.455413  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:39:21.455471  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:39:21.499343  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:39:21.499449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:39:21.553711  884264 provision.go:87] duration metric: took 440.893582ms to configureAuth
	I1120 21:39:21.553743  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:39:21.553979  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:21.554094  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.579157  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:21.579463  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:21.579484  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:39:22.222733  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:39:22.222764  884264 machine.go:97] duration metric: took 5.184080337s to provisionDockerMachine
	I1120 21:39:22.222784  884264 start.go:293] postStartSetup for "ha-409851-m03" (driver="docker")
	I1120 21:39:22.222795  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:39:22.222869  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:39:22.222949  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.258502  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.366087  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:39:22.370384  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:39:22.370464  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:39:22.370490  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:39:22.370582  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:39:22.370714  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:39:22.370740  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:39:22.370890  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:39:22.380356  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:39:22.405408  884264 start.go:296] duration metric: took 182.600947ms for postStartSetup
	I1120 21:39:22.405514  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:39:22.405570  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.425307  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.524350  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:39:22.529911  884264 fix.go:56] duration metric: took 5.939581904s for fixHost
	I1120 21:39:22.529937  884264 start.go:83] releasing machines lock for "ha-409851-m03", held for 5.939631735s
	I1120 21:39:22.530012  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:22.551424  884264 out.go:179] * Found network options:
	I1120 21:39:22.560397  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:39:22.563475  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563504  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563526  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563536  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:39:22.563629  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:39:22.563664  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:39:22.563687  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.563722  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.593348  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.599158  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.850591  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:39:22.957812  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:39:22.957885  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:39:22.971629  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:39:22.971651  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:39:22.971683  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:39:22.971740  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:39:22.992266  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:39:23.017885  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:39:23.018003  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:39:23.047686  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:39:23.071594  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:39:23.341231  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:39:23.618998  884264 docker.go:234] disabling docker service ...
	I1120 21:39:23.619120  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:39:23.641818  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:39:23.676773  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:39:23.963173  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:39:24.189401  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:39:24.206793  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:39:24.222800  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:39:24.222943  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.233205  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:39:24.233339  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.242572  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.252400  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.262758  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:39:24.283691  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.293195  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.301843  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.310942  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:39:24.319806  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:39:24.328026  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:39:24.598997  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:40:54.919407  884264 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320335625s)
	I1120 21:40:54.919437  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:40:54.919501  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:40:54.923827  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:40:54.923896  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:40:54.927766  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:40:54.956875  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:40:54.956961  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:54.989990  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:55.031599  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:40:55.034874  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:40:55.042500  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:40:55.050091  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:40:55.084630  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:40:55.090169  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:40:55.103094  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:40:55.103394  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:55.103694  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:40:55.127072  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:40:55.127420  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.4
	I1120 21:40:55.127444  884264 certs.go:195] generating shared ca certs ...
	I1120 21:40:55.127465  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:40:55.127604  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:40:55.127650  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:40:55.127662  884264 certs.go:257] generating profile certs ...
	I1120 21:40:55.127765  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:40:55.127891  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.b859e16b
	I1120 21:40:55.127933  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:40:55.127943  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:40:55.127956  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:40:55.127969  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:40:55.127980  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:40:55.127992  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:40:55.128006  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:40:55.128033  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:40:55.128045  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:40:55.128112  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:40:55.128145  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:40:55.128160  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:40:55.128187  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:40:55.128214  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:40:55.128241  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:40:55.128290  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:40:55.128326  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.128344  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.128357  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.128426  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:40:55.150727  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:40:55.251340  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:40:55.256433  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:40:55.266784  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:40:55.270534  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:40:55.279775  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:40:55.284275  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:40:55.294321  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:40:55.298684  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:40:55.307319  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:40:55.310734  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:40:55.319458  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:40:55.323063  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:40:55.331533  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:40:55.350148  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:40:55.371874  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:40:55.394257  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:40:55.416142  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:40:55.436749  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:40:55.457715  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:40:55.490155  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:40:55.512635  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:40:55.534827  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:40:55.566135  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:40:55.588247  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:40:55.601998  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:40:55.617348  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:40:55.631678  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:40:55.644956  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:40:55.658910  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:40:55.674549  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:40:55.689850  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:40:55.697169  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.706702  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:40:55.715708  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719673  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719798  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.761953  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:40:55.770722  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.779665  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:40:55.796200  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800339  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800460  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.842260  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:40:55.849720  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.857782  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:40:55.865998  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870179  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870265  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.917536  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:40:55.925307  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:40:55.929384  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:40:55.971056  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:40:56.013165  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:40:56.055581  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:40:56.098307  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:40:56.140587  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
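	(The `openssl x509 -noout -checkend 86400` calls above verify that each control-plane certificate remains valid for at least 24 hours. A minimal stdlib-only Go sketch of the same check; the path is one of the illustrative ones from the log, not a prescribed location:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of -checkend 86400: fail if the cert expires within 24h.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}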
	I1120 21:40:56.181956  884264 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 21:40:56.182053  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:40:56.182091  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:40:56.182144  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:40:56.195065  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:40:56.195123  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:40:56.195188  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:40:56.203155  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:40:56.203249  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:40:56.210881  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:40:56.226182  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:40:56.241370  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:40:56.258633  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:40:56.262629  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:40:56.274206  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.407402  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.425980  884264 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:40:56.426593  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:56.429208  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:40:56.432088  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.603926  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.618659  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:40:56.618769  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:40:56.619068  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m03" to be "Ready" ...
	W1120 21:40:58.623454  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	W1120 21:41:00.623718  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	I1120 21:41:03.122881  884264 node_ready.go:49] node "ha-409851-m03" is "Ready"
	I1120 21:41:03.122915  884264 node_ready.go:38] duration metric: took 6.503802683s for node "ha-409851-m03" to be "Ready" ...
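	(The readiness wait above repeatedly fetches the Node object until its Ready condition turns True. A minimal client-go sketch of one such check, assuming a kubeconfig at the default location and the node name from the log; this is not minikube's actual node_ready.go code:)

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-409851-m03", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Report the Ready condition; a poller would retry until Status is "True".
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}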
	I1120 21:41:03.122931  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:41:03.123035  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:41:03.138113  884264 api_server.go:72] duration metric: took 6.712035257s to wait for apiserver process to appear ...
	I1120 21:41:03.138137  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:41:03.138156  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:41:03.152932  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:41:03.154364  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:41:03.154387  884264 api_server.go:131] duration metric: took 16.242967ms to wait for apiserver health ...
	I1120 21:41:03.154396  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:41:03.163795  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:41:03.163878  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.163902  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.163924  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.163958  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.163982  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.164004  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.164039  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.164064  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.164085  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.164120  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.164142  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.164160  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.164181  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.164218  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.164236  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.164257  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.164295  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.164319  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.164339  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.164374  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.164397  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.164414  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.164435  884264 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.164470  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.164490  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.164510  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.164542  884264 system_pods.go:74] duration metric: took 10.139581ms to wait for pod list to return data ...
	I1120 21:41:03.164569  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:41:03.171615  884264 default_sa.go:45] found service account: "default"
	I1120 21:41:03.171638  884264 default_sa.go:55] duration metric: took 7.047374ms for default service account to be created ...
	I1120 21:41:03.171648  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:41:03.265734  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:41:03.267572  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.267646  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.267710  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.267791  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.267818  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.267839  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.267876  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.267901  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.267953  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.267979  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.268035  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.268061  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.268111  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.268136  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.268187  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.268216  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.268276  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.268345  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.268371  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.268391  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.268432  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.268515  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.268541  884264 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.268560  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.269441  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.269511  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.269535  884264 system_pods.go:126] duration metric: took 97.879853ms to wait for k8s-apps to be running ...
	I1120 21:41:03.269960  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:03.270187  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:03.292101  884264 system_svc.go:56] duration metric: took 22.131508ms WaitForService to wait for kubelet
	I1120 21:41:03.292181  884264 kubeadm.go:587] duration metric: took 6.866108619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:41:03.292218  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:03.296374  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296410  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296423  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296428  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296434  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296439  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296443  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296447  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296452  884264 node_conditions.go:105] duration metric: took 4.198189ms to run NodePressure ...
	I1120 21:41:03.296468  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:03.296492  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:03.300140  884264 out.go:203] 
	I1120 21:41:03.304344  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:03.304532  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.307946  884264 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:41:03.311732  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:41:03.314710  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:41:03.317785  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:41:03.317884  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:41:03.318031  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:41:03.318080  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:41:03.317859  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:41:03.318453  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.344793  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:41:03.344812  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:41:03.344825  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:41:03.344848  884264 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:41:03.344898  884264 start.go:364] duration metric: took 35.644µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:41:03.344917  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:41:03.344922  884264 fix.go:54] fixHost starting: m04
	I1120 21:41:03.345209  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.376330  884264 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:41:03.376356  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:41:03.379471  884264 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:41:03.379560  884264 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:41:03.742042  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.769660  884264 kic.go:430] container "ha-409851-m04" state is running.
	I1120 21:41:03.770657  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:03.796776  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.797038  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:41:03.797104  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:03.823466  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:03.823770  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:03.823778  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:41:03.824435  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:41:06.970676  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:06.970701  884264 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:41:06.970765  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:06.990700  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:06.991183  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:06.991203  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:41:07.146851  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:07.146933  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:07.166460  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:07.166767  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:07.166788  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:41:07.311657  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:41:07.311684  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:41:07.311699  884264 ubuntu.go:190] setting up certificates
	I1120 21:41:07.311712  884264 provision.go:84] configureAuth start
	I1120 21:41:07.311786  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:07.331035  884264 provision.go:143] copyHostCerts
	I1120 21:41:07.331091  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331124  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:41:07.331136  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331213  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:41:07.331298  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331322  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:41:07.331326  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331352  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:41:07.331393  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331415  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:41:07.331422  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331447  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:41:07.331497  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:41:08.623164  884264 provision.go:177] copyRemoteCerts
	I1120 21:41:08.623237  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:41:08.623286  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.639718  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:08.747935  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:41:08.748002  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:41:08.773774  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:41:08.773840  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:41:08.801882  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:41:08.801944  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:41:08.828179  884264 provision.go:87] duration metric: took 1.516452919s to configureAuth
	I1120 21:41:08.828204  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:41:08.828439  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:08.828555  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.849615  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:08.849931  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:08.849949  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:41:09.190143  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:41:09.190166  884264 machine.go:97] duration metric: took 5.39311756s to provisionDockerMachine
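The provisioning step above writes the CRI-O insecure-registry override to /etc/sysconfig/crio.minikube and restarts the runtime over SSH. A minimal sketch of confirming the override landed on the worker node, assuming only the container name from this log (the node runs as the docker container "ha-409851-m04"):

        # Read back the option file the provisioner just wrote (path taken from the log above)
        docker exec ha-409851-m04 cat /etc/sysconfig/crio.minikube
        # Expected content, per the SSH command above:
        # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '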
	I1120 21:41:09.190177  884264 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:41:09.190190  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:41:09.190252  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:41:09.190297  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.211823  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.319209  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:41:09.323014  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:41:09.323048  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:41:09.323086  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:41:09.323159  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:41:09.323239  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:41:09.323252  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:41:09.323406  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:41:09.331751  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:09.350101  884264 start.go:296] duration metric: took 159.908044ms for postStartSetup
	I1120 21:41:09.350192  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:41:09.350244  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.368495  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.469917  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:41:09.475514  884264 fix.go:56] duration metric: took 6.130583533s for fixHost
	I1120 21:41:09.475537  884264 start.go:83] releasing machines lock for "ha-409851-m04", held for 6.130630836s
	I1120 21:41:09.475607  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:09.501255  884264 out.go:179] * Found network options:
	I1120 21:41:09.504338  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 21:41:09.507242  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507285  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507296  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507328  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507344  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507354  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:41:09.507446  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:41:09.507499  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.507798  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:41:09.507867  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.541478  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.545988  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.688666  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:41:09.768175  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:41:09.768304  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:41:09.777453  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:41:09.777480  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:41:09.777528  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:41:09.777603  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:41:09.798578  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:41:09.812578  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:41:09.812674  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:41:09.835768  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:41:09.850693  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:41:10.028876  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:41:10.166862  884264 docker.go:234] disabling docker service ...
	I1120 21:41:10.166933  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:41:10.183999  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:41:10.199107  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:41:10.347931  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:41:10.487321  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:41:10.501617  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:41:10.518198  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:41:10.518277  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.527726  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:41:10.527803  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.539453  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.549501  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.558643  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:41:10.568755  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.581525  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.591524  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.602370  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:41:10.613570  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:41:10.624948  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:10.769380  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
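The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged port sysctl) before the restart. A minimal sketch of a spot-check run on the node, assuming the same file path shown in the log:

        # Verify the keys the provisioner edited
        sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
          /etc/crio/crio.conf.d/02-crio.conf
        # Confirm the runtime came back after the restart
        sudo systemctl is-active crio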
	I1120 21:41:10.965596  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:41:10.965735  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:41:10.970207  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:41:10.970330  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:41:10.974315  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:41:11.000434  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:41:11.000593  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:41:11.038585  884264 ssh_runner.go:195] Run: crio --version
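The version probe above goes through the CRI endpoint that was written to /etc/crictl.yaml a few steps earlier. A minimal sketch of the same check run by hand on the node, assuming the crio socket path from the log:

        # Query the runtime over its CRI socket (endpoint taken from /etc/crictl.yaml above)
        sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
        # Cross-check against the binary itself
        crio --version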
	I1120 21:41:11.076706  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:41:11.079567  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:41:11.082644  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:41:11.085633  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 21:41:11.088629  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:41:11.108683  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:41:11.114419  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.127176  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:41:11.127431  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.127709  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:41:11.147050  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:41:11.147378  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:41:11.147394  884264 certs.go:195] generating shared ca certs ...
	I1120 21:41:11.147409  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:41:11.147533  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:41:11.147578  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:41:11.147592  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:41:11.147607  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:41:11.147660  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:41:11.147683  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:41:11.147743  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:41:11.147786  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:41:11.147795  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:41:11.147820  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:41:11.147843  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:41:11.147871  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:41:11.147915  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:11.147959  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.147976  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.147989  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.148010  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:41:11.176245  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:41:11.195856  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:41:11.214613  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:41:11.238690  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:41:11.260518  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:41:11.281726  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:41:11.301862  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:41:11.308424  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.316198  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:41:11.324601  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330531  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330646  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.373994  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:41:11.382317  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.390537  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:41:11.399975  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404118  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404234  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.448070  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:41:11.457954  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.471564  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:41:11.480744  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486391  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.534970  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
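The openssl/ln sequence above installs each CA under its subject-hash name, which is how OpenSSL locates trust anchors in /etc/ssl/certs (the b5213941.0, 51391683.0 and 3ec20f2e.0 links checked above). A minimal sketch of verifying one such link by hand, using the minikube CA as the example:

        # Recompute the subject hash and confirm the symlink points at the installed CA
        hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
        ls -l "/etc/ssl/certs/${hash}.0"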
	I1120 21:41:11.543238  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:41:11.547092  884264 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:41:11.547139  884264 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:41:11.547290  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:41:11.547367  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:41:11.555116  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:41:11.555189  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:41:11.563262  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:41:11.578268  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:41:11.593301  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:41:11.598486  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.609343  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.746115  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.760921  884264 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:41:11.761346  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.764709  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:41:11.767650  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.914567  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.938460  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)} (continuation of the wrapped rest.Config line above)
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:41:11.938535  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:41:11.938816  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	W1120 21:41:13.945651  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	W1120 21:41:16.442900  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	I1120 21:41:17.943857  884264 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:41:17.943887  884264 node_ready.go:38] duration metric: took 6.005051124s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:41:17.943901  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:17.943959  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:17.956954  884264 system_svc.go:56] duration metric: took 13.044338ms WaitForService to wait for kubelet
	I1120 21:41:17.956985  884264 kubeadm.go:587] duration metric: took 6.196020803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
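The wait above polls the Node object for ha-409851-m04 through the API server until its Ready condition flips to True. A minimal sketch of the equivalent manual check with kubectl, assuming the kubeconfig context minikube creates for this profile is named "ha-409851" (an assumption; adjust to whatever context the run produced):

        # Read just the Ready condition of the rejoined worker node
        kubectl --context ha-409851 get node ha-409851-m04 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
        # Prints "True" once the kubelet on m04 reports Ready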
	I1120 21:41:17.957003  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:17.961298  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961332  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961343  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961348  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961353  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961357  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961361  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961364  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961369  884264 node_conditions.go:105] duration metric: took 4.361006ms to run NodePressure ...
	I1120 21:41:17.961388  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:17.961412  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:17.961738  884264 ssh_runner.go:195] Run: rm -f paused
	I1120 21:41:17.965714  884264 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:41:17.966209  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:41:17.987930  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994206  884264 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:41:17.994237  884264 pod_ready.go:86] duration metric: took 6.274933ms for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994247  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.000165  884264 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:41:18.000193  884264 pod_ready.go:86] duration metric: took 5.93943ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.004504  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012659  884264 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:41:18.012689  884264 pod_ready.go:86] duration metric: took 8.149311ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012700  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020780  884264 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:41:18.020813  884264 pod_ready.go:86] duration metric: took 8.102492ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020824  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.167216  884264 request.go:683] "Waited before sending request" delay="146.304273ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-409851-m03"
	I1120 21:41:18.366937  884264 request.go:683] "Waited before sending request" delay="196.339897ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:18.767349  884264 request.go:683] "Waited before sending request" delay="195.31892ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:19.167191  884264 request.go:683] "Waited before sending request" delay="142.259307ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:20.032402  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:22.528455  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:41:25.033882  884264 pod_ready.go:94] pod "etcd-ha-409851-m03" is "Ready"
	I1120 21:41:25.033912  884264 pod_ready.go:86] duration metric: took 7.013080383s for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.040254  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053388  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:41:25.053485  884264 pod_ready.go:86] duration metric: took 13.116035ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053512  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166598  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:41:25.166678  884264 pod_ready.go:86] duration metric: took 113.122413ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166704  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.367416  884264 request.go:683] "Waited before sending request" delay="167.284948ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:25.394798  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m03" is "Ready"
	I1120 21:41:25.394876  884264 pod_ready.go:86] duration metric: took 228.152279ms for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.567359  884264 request.go:683] "Waited before sending request" delay="172.329236ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:41:25.572178  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.768229  884264 request.go:683] "Waited before sending request" delay="195.205343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:41:25.966769  884264 request.go:683] "Waited before sending request" delay="194.270004ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:41:25.970209  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:41:25.970236  884264 pod_ready.go:86] duration metric: took 398.02564ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.970246  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.166647  884264 request.go:683] "Waited before sending request" delay="196.282354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:41:26.367492  884264 request.go:683] "Waited before sending request" delay="194.321944ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:41:26.370972  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:41:26.371028  884264 pod_ready.go:86] duration metric: took 400.775984ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.371038  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.567360  884264 request.go:683] "Waited before sending request" delay="196.215941ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:26.766668  884264 request.go:683] "Waited before sending request" delay="195.346826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:26.966667  884264 request.go:683] "Waited before sending request" delay="95.147149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:27.167326  884264 request.go:683] "Waited before sending request" delay="196.326498ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.568613  884264 request.go:683] "Waited before sending request" delay="192.229084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.966849  884264 request.go:683] "Waited before sending request" delay="91.23035ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:28.378730  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:30.379114  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:32.879033  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:35.379045  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:37.878241  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:40.378797  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:42.878559  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:45.379157  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:47.877869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:49.881128  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:52.378869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:54.878402  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:56.879168  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:59.386440  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:01.877608  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:04.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:06.379677  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:08.385036  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:10.879345  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:13.378081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:15.378210  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:17.878956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:20.379087  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:22.392566  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:24.878081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:26.878436  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:29.390304  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:31.877421  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:33.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:35.878348  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:38.378256  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:40.378547  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:42.878117  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:44.878306  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:47.378856  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:49.379096  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:51.877443  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:53.877489  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:55.878600  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:57.878767  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:00.379377  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:02.878543  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:04.879548  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:07.377207  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:09.377567  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:11.379602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:13.380062  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:15.878005  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:17.879034  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:20.380298  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:22.877944  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:24.878873  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:27.379047  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:29.380796  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:31.882322  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:34.378874  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:36.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:38.379341  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:40.379731  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:42.877518  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:44.878086  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:46.878385  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:49.377786  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:51.378044  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:53.378300  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:55.878538  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:57.878669  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:59.882674  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:02.378956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:04.879155  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:07.378530  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:09.878139  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:11.879593  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:14.377334  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:16.378277  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:18.381420  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:20.878229  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:22.878418  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:24.879069  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:27.377824  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:29.878048  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:31.878313  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:34.379581  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:36.877137  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:38.878394  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:40.878828  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:43.378176  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:45.878068  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:47.878425  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:49.878602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:52.378582  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:54.878764  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:57.378027  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:59.381427  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:01.885697  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:04.378368  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:06.378472  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:08.389992  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:10.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:13.377529  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:15.378711  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:17.877998  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:45:17.966316  884264 pod_ready.go:86] duration metric: took 3m51.595241121s for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:45:17.966353  884264 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:45:17.966368  884264 pod_ready.go:40] duration metric: took 4m0.000621775s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:45:17.969588  884264 out.go:203] 
	W1120 21:45:17.972643  884264 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:45:17.975633  884264 out.go:203] 
	
	
	==> CRI-O <==
	Nov 20 21:39:20 ha-409851 crio[669]: time="2025-11-20T21:39:20.249764629Z" level=info msg="Started container" PID=1236 containerID=e8fdabfa9a8b8aa91fe261bccd17d97129ae2a6b35505d477696e70753cdb6b7 description=kube-system/coredns-66bc5c9577-vfsp6/coredns id=74568f7d-6558-4ed8-91f1-68f1990c30b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13
	Nov 20 21:39:49 ha-409851 conmon[1114]: conmon 21c3c6a6f55d40a36bf5 <ninfo>: container 1116 exited with status 1
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.632784625Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0fd39ea-fa77-479d-b191-90503a9b28fb name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.633994314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=21e2e835-8254-4694-a7aa-72fd4afb923a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.6395122Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b15bfc0f-5310-494c-ac34-54e5ad11a7d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.639630371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644264636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644498854Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56db3570444b73799d70709773076eebd0890ab60259066f030c2205355ff337/merged/etc/passwd: no such file or directory"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644520434Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56db3570444b73799d70709773076eebd0890ab60259066f030c2205355ff337/merged/etc/group: no such file or directory"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644774427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.679571848Z" level=info msg="Created container a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31: kube-system/storage-provisioner/storage-provisioner" id=b15bfc0f-5310-494c-ac34-54e5ad11a7d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.680406648Z" level=info msg="Starting container: a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31" id=0c9c0665-a074-4f39-884e-1de941f1ab50 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.682871835Z" level=info msg="Started container" PID=1415 containerID=a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31 description=kube-system/storage-provisioner/storage-provisioner id=0c9c0665-a074-4f39-884e-1de941f1ab50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.268138893Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.332837674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.333104484Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.333245335Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378097346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378136591Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378166631Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386110048Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386371451Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386801782Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.391938158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.391990917Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	a4b68b4348d44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   1797f15844d53       storage-provisioner                 kube-system
	e8fdabfa9a8b8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   42485995e8876       coredns-66bc5c9577-vfsp6            kube-system
	2d712803661e1       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   4ca111ef4be62       busybox-7b57f96db7-mgvhj            default
	64d8739737a07       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   2896dc90c65df       coredns-66bc5c9577-pjk6c            kube-system
	d0e16d539ff71       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   4b383895c0d77       kube-proxy-4qqxh                    kube-system
	4a54d0081476a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   84b5e44666140       kindnet-7hmbf                       kube-system
	21c3c6a6f55d4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   1797f15844d53       storage-provisioner                 kube-system
	59d058da43a3d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   2                   43b1b9d53686c       kube-controller-manager-ha-409851   kube-system
	386fda302f30f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            2                   ee26925111068       kube-apiserver-ha-409851            kube-system
	5c78de3db456c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   88b09f2bac280       etcd-ha-409851                      kube-system
	be96e9e3ffb47       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   8637dd7ca13e1       kube-scheduler-ha-409851            kube-system
	b40d2cfd438a8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            1                   ee26925111068       kube-apiserver-ha-409851            kube-system
	696b700dcb568       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  0                   8537a8d9a1f65       kube-vip-ha-409851                  kube-system
	bbe2aa5c20be5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   1                   43b1b9d53686c       kube-controller-manager-ha-409851   kube-system
	
	
	==> coredns [64d8739737a078f7c00d99f881554e80533e8bfccd6b2cfc10dcc615416aee55] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55521 - 34960 "HINFO IN 1541082872970593707.3686323008074576518. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01805291s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e8fdabfa9a8b8aa91fe261bccd17d97129ae2a6b35505d477696e70753cdb6b7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37858 - 46462 "HINFO IN 2122825953572513070.5747140387215178598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022452217s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-409851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-409851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                1f114e92-c1bf-4c10-9121-0a6c185877b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mgvhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-pjk6c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-vfsp6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-409851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-7hmbf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-409851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-409851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4qqxh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-409851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-409851                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m58s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-409851 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   Starting                 6m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m41s (x8 over 6m41s)  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m58s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           5m30s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	
	
	Name:               ha-409851-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:42:48 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:42:48 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:42:48 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:42:48 +0000   Thu, 20 Nov 2025 21:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-409851-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3904cc8f-d8d1-4880-8dca-3fb5e1048dff
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqh2f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 etcd-ha-409851-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-56lr8                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-409851-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-409851-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pz7vt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-409851-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-409851-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   Starting                 7m13s                  kube-proxy       
	  Normal   RegisteredNode           11m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Warning  CgroupV1                 8m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m1s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m (x8 over 8m1s)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m (x8 over 8m1s)      kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m (x8 over 8m1s)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 6m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m39s (x8 over 6m39s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m39s (x8 over 6m39s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m39s (x8 over 6m39s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           5m31s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	
	
	Name:               ha-409851-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_34_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:34:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:45:17 +0000   Thu, 20 Nov 2025 21:41:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:45:17 +0000   Thu, 20 Nov 2025 21:41:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:45:17 +0000   Thu, 20 Nov 2025 21:41:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:45:17 +0000   Thu, 20 Nov 2025 21:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-409851-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                32235347-67b8-46f4-a0a9-5b30c9cc319c
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wfkjx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 etcd-ha-409851-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-27z7m                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-409851-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-409851-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jh55s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-409851-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-409851-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   Starting                 3m54s                kube-proxy       
	  Normal   RegisteredNode           10m                  node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Normal   RegisteredNode           7m22s                node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Warning  CgroupV1                 6m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 6m1s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node ha-409851-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node ha-409851-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m1s (x8 over 6m1s)  kubelet          Node ha-409851-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m59s                node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Normal   RegisteredNode           5m31s                node-controller  Node ha-409851-m03 event: Registered Node ha-409851-m03 in Controller
	  Normal   NodeNotReady             5m8s                 node-controller  Node ha-409851-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m1s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	
	
	Name:               ha-409851-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-409851-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                2c1b4976-2a70-4f78-8646-ed9804d613b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2d5r9       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m21s
	  kube-system                 kube-proxy-xnhl6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m54s                  kube-proxy       
	  Normal   Starting                 9m18s                  kube-proxy       
	  Warning  CgroupV1                 9m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    9m21s (x3 over 9m21s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m21s (x3 over 9m21s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m21s (x3 over 9m21s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m19s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeReady                8m39s                  kubelet          Node ha-409851-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           5m31s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeNotReady             5m9s                   node-controller  Node ha-409851-m04 status is now: NodeNotReady
	  Normal   Starting                 4m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m12s (x8 over 4m16s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m12s (x8 over 4m16s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m12s (x8 over 4m16s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Nov20 19:51] overlayfs: idmapped layers are currently not supported
	[ +26.087379] overlayfs: idmapped layers are currently not supported
	[Nov20 19:52] overlayfs: idmapped layers are currently not supported
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	[Nov20 21:32] overlayfs: idmapped layers are currently not supported
	[Nov20 21:33] overlayfs: idmapped layers are currently not supported
	[Nov20 21:34] overlayfs: idmapped layers are currently not supported
	[Nov20 21:36] overlayfs: idmapped layers are currently not supported
	[Nov20 21:37] overlayfs: idmapped layers are currently not supported
	[Nov20 21:38] overlayfs: idmapped layers are currently not supported
	[  +3.034217] overlayfs: idmapped layers are currently not supported
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5c78de3db456c35c2eafd8be0e59c965664f006cb3e9b19c4d9b05b81ab079fc] <==
	{"level":"warn","ts":"2025-11-20T21:40:39.566448Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:39.567250Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:42.005586Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:42.005652Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:44.567536Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:44.567524Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:46.007972Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:46.008033Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:49.568072Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:49.568086Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:50.009800Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:50.009864Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:54.011922Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:54.011985Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"13577a22751ca4e7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:54.568717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:54.568728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-20T21:40:58.333369Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"13577a22751ca4e7","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-20T21:40:58.333499Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.333538Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.400223Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"13577a22751ca4e7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-20T21:40:58.400266Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.413381Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.415661Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:40:59.570209Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:59.570288Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 21:45:20 up  4:27,  0 user,  load average: 0.19, 0.94, 1.33
	Linux ha-409851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a54d0081476a29dc91465df41cda7c5c9c2cb8309fda4632546728f61e59cf6] <==
	I1120 21:44:50.262228       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:00.261934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:00.261993       1 main.go:301] handling current node
	I1120 21:45:00.262012       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:00.262019       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:00.262599       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 21:45:00.262720       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:00.263453       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:00.263542       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:10.265604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:10.265641       1 main.go:301] handling current node
	I1120 21:45:10.265658       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:10.265666       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:10.265822       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 21:45:10.265836       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:10.265893       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:10.265905       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:20.261614       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:20.261650       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:20.261783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:20.261799       1 main.go:301] handling current node
	I1120 21:45:20.261815       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:20.261822       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:20.261880       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 21:45:20.261891       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [386fda302f30f7ebb1d4d339166cc1ec54dfa445272705792165e6163d57744c] <==
	I1120 21:39:13.787909       1 controller.go:90] Starting OpenAPI V3 controller
	I1120 21:39:13.788159       1 naming_controller.go:299] Starting NamingConditionController
	I1120 21:39:13.788226       1 establishing_controller.go:81] Starting EstablishingController
	I1120 21:39:13.788279       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1120 21:39:13.788325       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1120 21:39:13.788371       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1120 21:39:13.913965       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:39:13.940203       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:39:13.945914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:39:13.946009       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:39:13.946800       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:39:13.946821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:39:13.948219       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:39:13.948249       1 policy_source.go:240] refreshing policies
	W1120 21:39:13.950492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1120 21:39:13.951945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:39:13.969005       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1120 21:39:13.978030       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1120 21:39:13.992776       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:39:14.268877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1120 21:39:16.440486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1120 21:39:19.450109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:39:22.002455       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:39:22.120006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:39:22.146838       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [b40d2cfd438a8dc3a5f89de00510928701b9ef1887f2f4f9055a3978ea2197fa] <==
	I1120 21:38:39.115874       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1120 21:38:41.567494       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1120 21:38:41.567533       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1120 21:38:41.567541       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1120 21:38:41.567547       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1120 21:38:41.567551       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1120 21:38:41.567556       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1120 21:38:41.567561       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1120 21:38:41.567565       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1120 21:38:41.567569       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1120 21:38:41.567574       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1120 21:38:41.567578       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1120 21:38:41.567582       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1120 21:38:41.597999       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 21:38:41.599390       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1120 21:38:41.607075       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1120 21:38:41.623950       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:38:41.639375       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1120 21:38:41.639482       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1120 21:38:41.639814       1 instance.go:239] Using reconciler: lease
	W1120 21:38:41.641190       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 21:39:01.597873       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 21:39:01.599901       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1120 21:39:01.641307       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1120 21:39:01.641306       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [59d058da43a3deb02cebe99d92bd9fea5f53c1d0e1d4781459318e9f5ec8e02b] <==
	I1120 21:39:21.917257       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:39:21.917268       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:39:21.927999       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:39:21.928513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:39:21.928883       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 21:39:21.928907       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:39:21.934278       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:39:21.934649       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:39:21.934706       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:39:21.943813       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:39:21.943872       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:39:21.944918       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:39:21.944985       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:39:21.945073       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851"
	I1120 21:39:21.945122       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m02"
	I1120 21:39:21.945144       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m03"
	I1120 21:39:21.945173       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m04"
	I1120 21:39:21.945196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:39:21.946153       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:39:21.955276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:39:21.955492       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:39:21.955493       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:39:21.969284       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:39:21.977060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:41:17.714744       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-409851-m04"
	
	
	==> kube-controller-manager [bbe2aa5c20be55307484a6dc5e0cf27f1adb8b5e2bad7448657394d0153a3e84] <==
	I1120 21:38:41.548098       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:38:44.614759       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 21:38:44.618354       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:38:44.620563       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 21:38:44.622306       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 21:38:44.623227       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 21:38:44.624940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 21:39:13.636467       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d0e16d539ff71abab806825801bb28f583fae27f1d711dac09b9ccaed9935625] <==
	I1120 21:39:19.834696       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:39:20.701909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:39:20.836637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:39:20.836803       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:39:20.836987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:39:21.023295       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:39:21.077884       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:39:21.318794       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:39:21.319212       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:39:21.327635       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:39:21.329660       1 config.go:200] "Starting service config controller"
	I1120 21:39:21.329733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:39:21.329805       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:39:21.329839       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:39:21.329876       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:39:21.329902       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:39:21.334052       1 config.go:309] "Starting node config controller"
	I1120 21:39:21.334154       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:39:21.334188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:39:21.430611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:39:21.430705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:39:21.430721       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be96e9e3ffb4708dccf24988f485136e1039f591a2e9c93edef5d830431fa080] <==
	I1120 21:39:12.673026       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:39:12.684224       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:39:12.690790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:39:12.690836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:39:12.712397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1120 21:39:13.686244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:39:13.686326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:39:13.686370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:39:13.686413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:39:13.686453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:39:13.686492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:39:13.686533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:39:13.686596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:39:13.686639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:39:13.686681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:39:13.686732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:39:13.686764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:39:13.686799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:39:13.686879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:39:13.686923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:39:13.687046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:39:13.695222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:39:13.730211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:39:13.852932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 21:39:15.213191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.244029     809 apiserver.go:52] "Watching apiserver"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.253241     809 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-409851" podUID="714ee0ad-584f-4bd3-b031-cc6e2485512c"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.308093     809 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.308346     809 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.334825     809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 21:39:19 ha-409851 kubelet[809]: E1120 21:39:19.335285     809 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-vip-ha-409851\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="6f4588d400318593d47cec16914af85c" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.413889     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f7683fa-0199-444f-bcf4-42666203c1fa-xtables-lock\") pod \"kube-proxy-4qqxh\" (UID: \"2f7683fa-0199-444f-bcf4-42666203c1fa\") " pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414105     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f7683fa-0199-444f-bcf4-42666203c1fa-lib-modules\") pod \"kube-proxy-4qqxh\" (UID: \"2f7683fa-0199-444f-bcf4-42666203c1fa\") " pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414260     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-xtables-lock\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414418     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-cni-cfg\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414532     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-lib-modules\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414721     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/349c85dc-6341-43ab-b388-8734d72e3040-tmp\") pod \"storage-provisioner\" (UID: \"349c85dc-6341-43ab-b388-8734d72e3040\") " pod="kube-system/storage-provisioner"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.538354     809 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.595455     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb WatchSource:0}: Error finding container 1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb: Status 404 returned error can't find the container with id 1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.619580     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea WatchSource:0}: Error finding container 84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea: Status 404 returned error can't find the container with id 84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.914004     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367 WatchSource:0}: Error finding container 2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367: Status 404 returned error can't find the container with id 2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.931964     809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-409851" podStartSLOduration=0.931934967 podStartE2EDuration="931.934967ms" podCreationTimestamp="2025-11-20 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:39:19.931574117 +0000 UTC m=+41.856119838" watchObservedRunningTime="2025-11-20 21:39:19.931934967 +0000 UTC m=+41.856480688"
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.976075     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a WatchSource:0}: Error finding container 4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a: Status 404 returned error can't find the container with id 4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a
	Nov 20 21:39:20 ha-409851 kubelet[809]: W1120 21:39:20.042029     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13 WatchSource:0}: Error finding container 42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13: Status 404 returned error can't find the container with id 42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13
	Nov 20 21:39:20 ha-409851 kubelet[809]: I1120 21:39:20.341590     809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ab4d253eaf1d40f90b8f9740737427" path="/var/lib/kubelet/pods/50ab4d253eaf1d40f90b8f9740737427/volumes"
	Nov 20 21:39:38 ha-409851 kubelet[809]: E1120 21:39:38.225519     809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6\": container with ID starting with 637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6 not found: ID does not exist" containerID="637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6"
	Nov 20 21:39:38 ha-409851 kubelet[809]: I1120 21:39:38.226038     809 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6" err="rpc error: code = NotFound desc = could not find container \"637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6\": container with ID starting with 637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6 not found: ID does not exist"
	Nov 20 21:39:38 ha-409851 kubelet[809]: E1120 21:39:38.226672     809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b\": container with ID starting with e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b not found: ID does not exist" containerID="e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b"
	Nov 20 21:39:38 ha-409851 kubelet[809]: I1120 21:39:38.226841     809 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b" err="rpc error: code = NotFound desc = could not find container \"e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b\": container with ID starting with e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b not found: ID does not exist"
	Nov 20 21:39:50 ha-409851 kubelet[809]: I1120 21:39:50.632093     809 scope.go:117] "RemoveContainer" containerID="21c3c6a6f55d40a36bf5628afc1fc7cfc6b87251643b9599eab6ab7a2a06740d"
	

                                                
                                                
-- /stdout --
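The second kube-apiserver instance in the logs above exits fatally after failing to reach etcd on 127.0.0.1:2379 ("Error creating leases: error creating storage factory: context deadline exceeded"). A minimal triage sketch from the host, assuming the ha-409851 node is still running (the container ID below is a placeholder, not taken from this report):

	# list etcd containers (running or exited) on the primary control plane
	out/minikube-linux-arm64 -p ha-409851 ssh -- sudo crictl ps -a --name etcd
	# then tail the log of the container ID printed by the previous command
	out/minikube-linux-arm64 -p ha-409851 ssh -- sudo crictl logs --tail 20 <etcd-container-id>
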
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-409851 -n ha-409851
helpers_test.go:269: (dbg) Run:  kubectl --context ha-409851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (448.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-409851" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-409851\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-409851\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-409851\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-409851
helpers_test.go:243: (dbg) docker inspect ha-409851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	        "Created": "2025-11-20T21:32:05.722530265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 884396,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:38:31.055844346Z",
	            "FinishedAt": "2025-11-20T21:38:30.436661317Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hosts",
	        "LogPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853-json.log",
	        "Name": "/ha-409851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-409851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-409851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	                "LowerDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-409851",
	                "Source": "/var/lib/docker/volumes/ha-409851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-409851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-409851",
	                "name.minikube.sigs.k8s.io": "ha-409851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8599a98b0ccff252f0c8c9aad9b46a3b9148a590bf903962ae9e74255b1d7bab",
	            "SandboxKey": "/var/run/docker/netns/8599a98b0ccf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33917"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-409851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b7:48:6c:96:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad232b357b1bc65babf7a48f3581b00686ef0ccc0f86acee1a57f8a071f682f1",
	                    "EndpointID": "4581080836f9e1d498ecfc4ffb90702bf2c1e0bf832ae79ac8d4da9d8f193945",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-409851",
	                        "d20916d298c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
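The same container details can also be pulled field by field instead of as the full JSON; a minimal sketch using docker's Go-template formatting (the field paths mirror the dump above):

	# container state, restart count, and the IP on the ha-409851 network
	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} ip={{(index .NetworkSettings.Networks "ha-409851").IPAddress}}' ha-409851
	# host port bound to the apiserver port inside the container
	docker port ha-409851 8443/tcp
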
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 logs -n 25: (1.544103696s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt                                                            │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt                                                │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node start m02 --alsologtostderr -v 5                                                                                     │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │                     │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:38 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5                                                                                  │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:38 UTC │                     │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │                     │
	│ node    │ ha-409851 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:45 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:38:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:38:30.769876  884264 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:38:30.770088  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770114  884264 out.go:374] Setting ErrFile to fd 2...
	I1120 21:38:30.770133  884264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:38:30.770657  884264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:38:30.771309  884264 out.go:368] Setting JSON to false
	I1120 21:38:30.772185  884264 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15636,"bootTime":1763659075,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:38:30.772284  884264 start.go:143] virtualization:  
	I1120 21:38:30.775797  884264 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:38:30.779473  884264 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:38:30.779630  884264 notify.go:221] Checking for updates...
	I1120 21:38:30.785039  884264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:38:30.787825  884264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:30.790672  884264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:38:30.793534  884264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:38:30.796313  884264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:38:30.799725  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:30.799830  884264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:38:30.836806  884264 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:38:30.836950  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.901769  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.892669658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.901887  884264 docker.go:319] overlay module found
	I1120 21:38:30.904943  884264 out.go:179] * Using the docker driver based on existing profile
	I1120 21:38:30.907794  884264 start.go:309] selected driver: docker
	I1120 21:38:30.907812  884264 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.907982  884264 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:38:30.908085  884264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:38:30.967881  884264 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:38:30.95851914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:38:30.968308  884264 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:38:30.968343  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:30.968403  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:30.968455  884264 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:30.971749  884264 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:38:30.974680  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:30.977600  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:30.980407  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:30.980458  884264 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:38:30.980472  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:30.980485  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:30.980567  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:30.980578  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:30.980718  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:30.999616  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:30.999641  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:30.999654  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:30.999678  884264 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:30.999743  884264 start.go:364] duration metric: took 37.309µs to acquireMachinesLock for "ha-409851"
	I1120 21:38:30.999781  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:30.999790  884264 fix.go:54] fixHost starting: 
	I1120 21:38:31.000072  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.018393  884264 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:38:31.018439  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:31.021858  884264 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:38:31.021974  884264 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:38:31.304211  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:31.327776  884264 kic.go:430] container "ha-409851" state is running.
	I1120 21:38:31.328187  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:31.353945  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:31.354443  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:31.354512  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:31.382173  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:31.382524  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:31.382534  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:31.383289  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:38:34.531685  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.531763  884264 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:38:34.531863  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.551282  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.551609  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.551626  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:38:34.704765  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:38:34.704852  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:34.723366  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:34.723694  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:34.723717  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:34.867982  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:34.868025  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:34.868088  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:34.868104  884264 provision.go:84] configureAuth start
	I1120 21:38:34.868188  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:34.887153  884264 provision.go:143] copyHostCerts
	I1120 21:38:34.887208  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887270  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:34.887291  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:34.887383  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:34.887509  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887538  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:34.887549  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:34.887584  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:34.887659  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887686  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:34.887694  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:34.887724  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:34.887782  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
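	The provisioning step above signs a fresh apiserver certificate against the minikube CA, adding the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-409851, localhost, minikube). A minimal Go sketch of the same idea, using only the standard library, is below; the file names, the PKCS#1 RSA key format, and the three-year validity are illustrative assumptions rather than minikube's actual certs.go logic.

package main

// Sketch: sign a server certificate against an existing CA with the SANs
// from the log above. Paths, key format and validity are assumptions.

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short; a real tool would handle errors explicitly.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Load the CA certificate and private key (assumed PEM, PKCS#1 RSA key).
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	// Fresh server key plus a template carrying the SANs seen in the log.
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-409851"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-409851", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}

	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	_ = os.WriteFile("server.pem", certPEM, 0o644)
	_ = os.WriteFile("server-key.pem", keyPEM, 0o600)
}
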
	I1120 21:38:35.400008  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:35.400088  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:35.400141  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.418360  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:35.518831  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:35.518950  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:35.537804  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:35.537900  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:38:35.556580  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:35.556644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:35.575458  884264 provision.go:87] duration metric: took 707.296985ms to configureAuth
	I1120 21:38:35.575487  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:35.575723  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:35.575844  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.594086  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:35.594409  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33917 <nil> <nil>}
	I1120 21:38:35.594430  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:35.962817  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:35.962837  884264 machine.go:97] duration metric: took 4.608380541s to provisionDockerMachine
	I1120 21:38:35.962848  884264 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:38:35.962859  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:35.962920  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:35.962989  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:35.984847  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.091216  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:36.094852  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:36.094880  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:36.094891  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:36.094947  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:36.095090  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:36.095099  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:36.095212  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:36.102846  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:36.120698  884264 start.go:296] duration metric: took 157.834355ms for postStartSetup
	I1120 21:38:36.120824  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:36.120914  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.138055  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.236342  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:36.241086  884264 fix.go:56] duration metric: took 5.241287155s for fixHost
	I1120 21:38:36.241113  884264 start.go:83] releasing machines lock for "ha-409851", held for 5.241354183s
	I1120 21:38:36.241193  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:38:36.259831  884264 ssh_runner.go:195] Run: cat /version.json
	I1120 21:38:36.259893  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.260152  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:36.260229  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:36.287560  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.292613  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:36.386937  884264 ssh_runner.go:195] Run: systemctl --version
	I1120 21:38:36.496830  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:36.537327  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:36.541923  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:36.542024  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:36.549865  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:36.549933  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:36.549983  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:36.550070  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:36.565179  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:36.578552  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:36.578675  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:36.594881  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:36.608683  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:36.731342  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:36.868669  884264 docker.go:234] disabling docker service ...
	I1120 21:38:36.868857  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:36.886109  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:36.900226  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:37.014736  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:37.144034  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:37.158890  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:37.173954  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:37.174053  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.183273  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:37.183345  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.192471  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.201342  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.210418  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:37.218694  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.227957  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.236515  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:37.245491  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:37.253272  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:38:37.260653  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:37.378780  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
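	The commands above patch /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, unprivileged-port sysctl) and then restart CRI-O. A rough Go sketch of the same set-or-append pattern follows; the setCrioOption helper is hypothetical and only two of the keys touched in the log are shown.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites (or appends) a `key = "value"` line in a CRI-O
// drop-in file, mirroring the sed edits in the log above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Path and values taken from the log; this would run on the node itself.
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10.1",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioOption(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}

	Running the sketch twice is safe: an existing line is replaced rather than duplicated, which is also what the grep-guarded sed commands in the log ensure.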
	I1120 21:38:37.568343  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:37.568517  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:37.572886  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:37.572998  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:37.576787  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:37.603768  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:38:37.603878  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.634707  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:37.668026  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:37.670996  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:37.688086  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:37.692097  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.702318  884264 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:38:37.702473  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:37.702533  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.738810  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.738882  884264 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:38:37.739011  884264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:38:37.764274  884264 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:38:37.764295  884264 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:38:37.764305  884264 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:38:37.764401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:38:37.764481  884264 ssh_runner.go:195] Run: crio config
	I1120 21:38:37.825630  884264 cni.go:84] Creating CNI manager for ""
	I1120 21:38:37.825661  884264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 21:38:37.825685  884264 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:38:37.825743  884264 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:38:37.825905  884264 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
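	The kubeadm config above is rendered from the option set dumped at kubeadm.go:190. A minimal sketch of rendering such a fragment with text/template is shown below; the initTmpl template and initParams struct are illustrative stand-ins, not minikube's real template, and only the InitConfiguration section is reproduced.

package main

import (
	"os"
	"text/template"
)

// Illustrative template for the InitConfiguration fragment shown above.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

type initParams struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	// Values taken from the node entry for ha-409851 in the log above.
	p := initParams{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "ha-409851",
		NodeIP:           "192.168.49.2",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
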
	
	I1120 21:38:37.825931  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:37.825986  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:37.839066  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:37.839175  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
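	The kube-vip static pod manifest above carries the control-plane VIP (192.168.49.254) in the address environment variable and is copied to /etc/kubernetes/manifests/kube-vip.yaml a few steps later. A short Go sketch that reads such a manifest back and prints the VIP follows; it assumes the third-party gopkg.in/yaml.v3 package, and the trimmed podManifest struct exists only for this example.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// podManifest keeps only the fields needed to read the VIP back out.
type podManifest struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	// Path as written during the restart above; run on the control-plane node.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var m podManifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Printf("kube-vip VIP for container %q: %s\n", c.Name, e.Value)
			}
		}
	}
}
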
	I1120 21:38:37.839248  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:37.847133  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:37.847235  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:38:37.855412  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:38:37.868477  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:37.881823  884264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:38:37.895195  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:38:37.908845  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:37.912943  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:37.923133  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:38.049716  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:38.067155  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:38:38.067178  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:38.067197  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.067386  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:38.067464  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:38.067494  884264 certs.go:257] generating profile certs ...
	I1120 21:38:38.067639  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:38.067683  884264 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56
	I1120 21:38:38.067722  884264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 21:38:38.134399  884264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 ...
	I1120 21:38:38.134432  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56: {Name:mk7acbd3c6c1dd357ee45d74f751ed3339a8f067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134668  884264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 ...
	I1120 21:38:38.134693  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56: {Name:mkd0412497c04b2292f00ce455371fa1840c4bc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:38.134834  884264 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:38:38.135032  884264 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.f7e7ae56 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:38:38.135229  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:38.135248  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:38.135280  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:38.135304  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:38.135321  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:38.135350  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:38.135384  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:38.135407  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:38.135423  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:38.135493  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:38.135556  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:38.135571  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:38.135614  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:38.135660  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:38.135691  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:38.135764  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:38.135818  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.135841  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.135858  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.136478  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:38.161386  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:38.183426  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:38.209571  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:38.230449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:38.269189  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:38.290285  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:38.310366  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:38.336702  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:38.356298  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:38.377772  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:38.397354  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:38:38.410774  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:38.417590  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.426055  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:38.435256  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442057  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.442128  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:38.484356  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:38.492206  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.499992  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:38.507965  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512359  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.512476  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:38.554117  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:38:38.562052  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.569885  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:38.578289  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582380  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.582505  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:38.624140  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:38.633756  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:38.637748  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:38.679477  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:38.725454  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:38.767445  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:38.816551  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:38.874060  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
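Each `openssl x509 -checkend 86400` probe above asks whether a control-plane certificate is still valid for at least another 24 hours; no failures are logged, so the restart path can reuse the existing certificates. A minimal Go sketch of the same check, using the certificate path shown in the log (illustrative only, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// the same file the log just probed with `openssl x509 -checkend 86400`
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400 fails when the certificate expires within the next 86400 seconds
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}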
	I1120 21:38:38.945404  884264 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:38:38.945592  884264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:38:38.945702  884264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:38:39.035653  884264 cri.go:89] found id: "5c78de3db456c35c2eafd8be0e59c965664f006cb3e9b19c4d9b05b81ab079fc"
	I1120 21:38:39.035728  884264 cri.go:89] found id: "be96e9e3ffb4708dccf24988f485136e1039f591a2e9c93edef5d830431fa080"
	I1120 21:38:39.035748  884264 cri.go:89] found id: "b40d2cfd438a8dc3a5f89de00510928701b9ef1887f2f4f9055a3978ea2197fa"
	I1120 21:38:39.035769  884264 cri.go:89] found id: "696b700dcb568291344392af5fbbff9e59bb78b02bbbf2fa18e2156bab42fae1"
	I1120 21:38:39.035804  884264 cri.go:89] found id: "bbe2aa5c20be55307484a6dc5e0cf27f1adb8b5e2bad7448657394d0153a3e84"
	I1120 21:38:39.035846  884264 cri.go:89] found id: ""
	I1120 21:38:39.035929  884264 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:38:39.060419  884264 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:38:39Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:38:39.060556  884264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:38:39.074901  884264 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:38:39.074968  884264 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:38:39.075123  884264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:38:39.088673  884264 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:39.089259  884264 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.089441  884264 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:38:39.089845  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.090518  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:38:39.091335  884264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:38:39.091424  884264 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:38:39.091402  884264 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:38:39.091527  884264 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:38:39.091559  884264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:38:39.091579  884264 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:38:39.091949  884264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:38:39.104395  884264 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:38:39.104468  884264 kubeadm.go:602] duration metric: took 29.411064ms to restartPrimaryControlPlane
	I1120 21:38:39.104495  884264 kubeadm.go:403] duration metric: took 159.115003ms to StartCluster
	I1120 21:38:39.104539  884264 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.104635  884264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:38:39.105401  884264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:39.105666  884264 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:39.105723  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:38:39.105753  884264 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:38:39.106516  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.111744  884264 out.go:179] * Enabled addons: 
	I1120 21:38:39.114735  884264 addons.go:515] duration metric: took 8.971082ms for enable addons: enabled=[]
	I1120 21:38:39.114834  884264 start.go:247] waiting for cluster config update ...
	I1120 21:38:39.114858  884264 start.go:256] writing updated cluster config ...
	I1120 21:38:39.118409  884264 out.go:203] 
	I1120 21:38:39.121722  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:39.121897  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.125210  884264 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:38:39.128166  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:38:39.131274  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:38:39.134220  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:38:39.134243  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:38:39.134349  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:38:39.134358  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:38:39.134481  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.134707  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:38:39.163368  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:38:39.163387  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:38:39.163399  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:38:39.163424  884264 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:38:39.163475  884264 start.go:364] duration metric: took 37.473µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:38:39.163495  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:38:39.163500  884264 fix.go:54] fixHost starting: m02
	I1120 21:38:39.163761  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.188597  884264 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:38:39.188621  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:38:39.197319  884264 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:38:39.197414  884264 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:38:39.580228  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:38:39.619726  884264 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:38:39.620289  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:39.645172  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:38:39.645452  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:38:39.645526  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:39.670151  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:39.670895  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:39.670954  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:38:39.671692  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44478->127.0.0.1:33922: read: connection reset by peer
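The handshake failure above is a transient artifact of the container having just been restarted: sshd inside ha-409851-m02 is not yet accepting connections, and the dial is simply retried until the hostname command succeeds a few seconds later. A rough sketch of that dial-and-retry pattern (hypothetical, not libmachine's actual code), using the forwarded port from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection to the forwarded SSH port
// (127.0.0.1:33922 in this log) until sshd inside the restarted container answers.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Now().After(stop) {
			return nil, fmt.Errorf("gave up dialing %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if conn, err := dialWithRetry("127.0.0.1:33922", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		conn.Close()
		fmt.Println("ssh port reachable")
	}
}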
	I1120 21:38:42.978516  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:38:42.978591  884264 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:38:42.978693  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.005096  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.005433  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.005447  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:38:43.320783  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:38:43.320866  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:43.374875  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:43.375237  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:43.375260  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:38:43.620767  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:38:43.620794  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:38:43.620810  884264 ubuntu.go:190] setting up certificates
	I1120 21:38:43.620821  884264 provision.go:84] configureAuth start
	I1120 21:38:43.620881  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:43.659411  884264 provision.go:143] copyHostCerts
	I1120 21:38:43.659453  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659485  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:38:43.659493  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:38:43.659567  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:38:43.659644  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659661  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:38:43.659665  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:38:43.659690  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:38:43.659728  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659743  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:38:43.659747  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:38:43.659768  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:38:43.659814  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:38:44.333480  884264 provision.go:177] copyRemoteCerts
	I1120 21:38:44.333555  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:38:44.333605  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.352064  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:44.461767  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:38:44.461834  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:38:44.500018  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:38:44.500084  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:38:44.547484  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:38:44.547557  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:38:44.596357  884264 provision.go:87] duration metric: took 975.522241ms to configureAuth
	I1120 21:38:44.596401  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:38:44.596654  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:44.596788  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:44.624344  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:38:44.624651  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33922 <nil> <nil>}
	I1120 21:38:44.624670  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:38:45.322074  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:38:45.322113  884264 machine.go:97] duration metric: took 5.676650753s to provisionDockerMachine
	I1120 21:38:45.322128  884264 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:38:45.322141  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:38:45.322226  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:38:45.322277  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.342731  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.453499  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:38:45.470888  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:38:45.470938  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:38:45.470950  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:38:45.471014  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:38:45.471096  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:38:45.471109  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:38:45.471214  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:38:45.489273  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:45.556457  884264 start.go:296] duration metric: took 234.311564ms for postStartSetup
	I1120 21:38:45.556611  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:38:45.556676  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.587707  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.729685  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:38:45.740986  884264 fix.go:56] duration metric: took 6.577477813s for fixHost
	I1120 21:38:45.741008  884264 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.577525026s
	I1120 21:38:45.741083  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:38:45.771820  884264 out.go:179] * Found network options:
	I1120 21:38:45.774905  884264 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:38:45.777764  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:38:45.777810  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:38:45.777890  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:38:45.777942  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.778213  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:38:45.778264  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:38:45.814965  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:45.816280  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:38:46.130838  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:38:46.136697  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:38:46.136780  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:38:46.154525  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:38:46.154562  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:38:46.154596  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:38:46.154657  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:38:46.179167  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:38:46.198207  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:38:46.198285  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:38:46.220547  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:38:46.238372  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:38:46.474214  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:38:46.692069  884264 docker.go:234] disabling docker service ...
	I1120 21:38:46.692151  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:38:46.711611  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:38:46.733293  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:38:46.937783  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:38:47.161295  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:38:47.177649  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:38:47.196405  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:38:47.196499  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.211080  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:38:47.211159  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.226280  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.241556  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.251537  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:38:47.263194  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.279048  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.292565  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:38:47.305383  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:38:47.318266  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
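The series of `sed` edits above points cri-o at the registry.k8s.io/pause:3.10.1 pause image, switches it to the cgroupfs cgroup manager, and opens unprivileged low ports via default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. For illustration, the cgroup_manager substitution could be expressed in Go as follows (a sketch, not minikube's code; the path and value are the ones shown in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("cgroup_manager set to cgroupfs; restart crio to apply")
}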
	I1120 21:38:47.330851  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:47.572162  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:38:47.826907  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:38:47.827027  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:38:47.830650  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:38:47.830757  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:38:47.834471  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:38:47.858658  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:38:47.858770  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.887568  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:38:47.924184  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:38:47.927160  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:38:47.930191  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:38:47.947316  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:38:47.951294  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:47.961645  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:38:47.961891  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:47.962176  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:38:47.978704  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:38:47.979070  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:38:47.979083  884264 certs.go:195] generating shared ca certs ...
	I1120 21:38:47.979100  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:38:47.979221  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:38:47.979265  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:38:47.979275  884264 certs.go:257] generating profile certs ...
	I1120 21:38:47.979366  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:38:47.979435  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.36974727
	I1120 21:38:47.979478  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:38:47.979491  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:38:47.979505  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:38:47.979525  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:38:47.979536  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:38:47.979550  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:38:47.979561  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:38:47.979576  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:38:47.979587  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:38:47.979641  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:38:47.979672  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:38:47.979689  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:38:47.979713  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:38:47.979738  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:38:47.979762  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:38:47.979804  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:38:47.979840  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:38:47.979855  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:38:47.979869  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:47.979929  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:38:47.996700  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:38:48.095431  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:38:48.099410  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:38:48.107940  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:38:48.111757  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:38:48.120021  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:38:48.123592  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:38:48.132027  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:38:48.135667  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:38:48.143707  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:38:48.147064  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:38:48.155777  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:38:48.159326  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:38:48.168074  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:38:48.187052  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:38:48.204261  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:38:48.222484  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:38:48.239999  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:38:48.257750  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:38:48.275489  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:38:48.293203  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:38:48.310644  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:38:48.333442  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:38:48.353223  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:38:48.371976  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:38:48.384868  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:38:48.397625  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:38:48.410587  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:38:48.423732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:38:48.437291  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:38:48.449732  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:38:48.462200  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:38:48.468726  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.476219  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:38:48.483790  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.487957  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.488071  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:38:48.529603  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:38:48.541715  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.551230  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:38:48.560557  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566086  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.566214  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:38:48.614556  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:38:48.622341  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.630607  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:38:48.638692  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642390  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.642458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:38:48.683660  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:38:48.692961  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:38:48.697105  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:38:48.738157  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:38:48.779134  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:38:48.820771  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:38:48.861964  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:38:48.903079  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:38:48.946240  884264 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:38:48.946401  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:38:48.946432  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:38:48.946494  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:38:48.959247  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:38:48.959318  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
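The generated manifest runs kube-vip as a static pod with leader election on the plndr-cp-lock lease and advertises the virtual IP 192.168.49.254 on eth0; because the ip_vs modules were not found, only the ARP-based control-plane VIP is set up, with no IPVS load-balancing. A quick reachability probe against that VIP might look like this (a sketch, not part of the test suite):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// the HA virtual IP and port from the kube-vip manifest above
	addr := net.JoinHostPort("192.168.49.254", "8443")
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", addr,
		&tls.Config{InsecureSkipVerify: true}) // only probing reachability, not identity
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver is answering on the kube-vip address")
}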
	I1120 21:38:48.959400  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:38:48.967383  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:38:48.967482  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:38:48.975230  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:38:48.988715  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:38:49.001843  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:38:49.019090  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:38:49.023118  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:38:49.034137  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.154884  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.169065  884264 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:38:49.169534  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:38:49.173571  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:38:49.176570  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:38:49.315404  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:38:49.329975  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:38:49.330049  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:38:49.330298  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	W1120 21:38:59.331759  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout
	I1120 21:39:02.652543  884264 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:39:12.654218  884264 node_ready.go:55] error getting node "ha-409851-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-409851-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:48284->192.168.49.2:8443: read: connection reset by peer
	I1120 21:39:13.752634  884264 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:39:13.752662  884264 node_ready.go:38] duration metric: took 24.422335125s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:39:13.752675  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:39:13.752734  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:39:13.802621  884264 api_server.go:72] duration metric: took 24.633509474s to wait for apiserver process to appear ...
	I1120 21:39:13.802644  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:39:13.802666  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:13.846540  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:39:13.846565  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:39:14.303057  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.317076  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.317121  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:14.803756  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:14.835165  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:14.835252  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.302766  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.327917  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.327996  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:15.802846  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:15.844402  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:39:15.844486  884264 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:39:16.302774  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:39:16.349139  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:39:16.355368  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:39:16.355451  884264 api_server.go:131] duration metric: took 2.552797549s to wait for apiserver health ...
	I1120 21:39:16.355475  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:39:16.388991  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:39:16.389076  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.389097  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.389116  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.389161  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.389188  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.389225  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.389248  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.389268  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.389303  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.389327  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.389347  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.389382  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.389405  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.389426  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.389462  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.389484  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.389501  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.389582  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.389609  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.389631  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.389670  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.389696  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.389718  884264 system_pods.go:61] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.389753  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.389791  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.389812  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.389848  884264 system_pods.go:74] duration metric: took 34.353977ms to wait for pod list to return data ...
	I1120 21:39:16.389871  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:39:16.416752  884264 default_sa.go:45] found service account: "default"
	I1120 21:39:16.416829  884264 default_sa.go:55] duration metric: took 26.934653ms for default service account to be created ...
	I1120 21:39:16.416854  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:39:16.495655  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:39:16.495738  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:39:16.495762  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:39:16.495799  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:39:16.495829  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:39:16.495850  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:39:16.495891  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:39:16.495919  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:39:16.495943  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:39:16.495976  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:39:16.496003  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:39:16.496027  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:39:16.496065  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:39:16.496119  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:39:16.496154  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:39:16.496175  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:39:16.496206  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:39:16.496230  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:39:16.496253  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:39:16.496290  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:39:16.496316  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:39:16.496339  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:39:16.496376  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:39:16.496404  884264 system_pods.go:89] "kube-vip-ha-409851" [714ee0ad-584f-4bd3-b031-cc6e2485512c] Running
	I1120 21:39:16.496424  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:39:16.496462  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:39:16.496488  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:39:16.496514  884264 system_pods.go:126] duration metric: took 79.640825ms to wait for k8s-apps to be running ...
	I1120 21:39:16.496549  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:39:16.496649  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:39:16.525131  884264 system_svc.go:56] duration metric: took 28.572383ms WaitForService to wait for kubelet
	I1120 21:39:16.525221  884264 kubeadm.go:587] duration metric: took 27.356113948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:39:16.525256  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:39:16.547500  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547592  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547622  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547645  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547686  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547706  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547727  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:39:16.547760  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:39:16.547787  884264 node_conditions.go:105] duration metric: took 22.508874ms to run NodePressure ...
	I1120 21:39:16.547814  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:39:16.547869  884264 start.go:256] writing updated cluster config ...
	I1120 21:39:16.551433  884264 out.go:203] 
	I1120 21:39:16.554880  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:16.555111  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.558694  884264 out.go:179] * Starting "ha-409851-m03" control-plane node in "ha-409851" cluster
	I1120 21:39:16.562364  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:39:16.565426  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:39:16.568528  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:39:16.568640  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:39:16.568611  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:39:16.568996  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:39:16.569028  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:39:16.569191  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:16.590195  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:39:16.590214  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:39:16.590225  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:39:16.590248  884264 start.go:360] acquireMachinesLock for ha-409851-m03: {Name:mkdc61c72ab6a67582f9ee213a06b683b619e587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:39:16.590297  884264 start.go:364] duration metric: took 34.011µs to acquireMachinesLock for "ha-409851-m03"
	I1120 21:39:16.590316  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:39:16.590321  884264 fix.go:54] fixHost starting: m03
	I1120 21:39:16.590574  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:16.615086  884264 fix.go:112] recreateIfNeeded on ha-409851-m03: state=Stopped err=<nil>
	W1120 21:39:16.615115  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:39:16.618135  884264 out.go:252] * Restarting existing docker container for "ha-409851-m03" ...
	I1120 21:39:16.618225  884264 cli_runner.go:164] Run: docker start ha-409851-m03
	I1120 21:39:16.978914  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:39:17.006894  884264 kic.go:430] container "ha-409851-m03" state is running.
	I1120 21:39:17.007317  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:17.038413  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:39:17.038674  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:39:17.038742  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:17.068281  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:17.068584  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:17.068592  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:39:17.070869  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:39:20.309993  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.310063  884264 ubuntu.go:182] provisioning hostname "ha-409851-m03"
	I1120 21:39:20.310163  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.336716  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.337029  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.337043  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m03 && echo "ha-409851-m03" | sudo tee /etc/hostname
	I1120 21:39:20.816264  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m03
	
	I1120 21:39:20.816432  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:20.846177  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:20.846510  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:20.846531  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:39:21.112630  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:39:21.112715  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:39:21.112747  884264 ubuntu.go:190] setting up certificates
	I1120 21:39:21.112788  884264 provision.go:84] configureAuth start
	I1120 21:39:21.112872  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:21.141385  884264 provision.go:143] copyHostCerts
	I1120 21:39:21.141425  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141458  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:39:21.141465  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:39:21.141537  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:39:21.141610  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141626  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:39:21.141631  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:39:21.141657  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:39:21.141696  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141713  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:39:21.141717  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:39:21.141739  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:39:21.141793  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m03 san=[127.0.0.1 192.168.49.4 ha-409851-m03 localhost minikube]
	I1120 21:39:21.285547  884264 provision.go:177] copyRemoteCerts
	I1120 21:39:21.285659  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:39:21.285756  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.304352  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:21.419419  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:39:21.419479  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:39:21.455413  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:39:21.455471  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:39:21.499343  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:39:21.499449  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:39:21.553711  884264 provision.go:87] duration metric: took 440.893582ms to configureAuth
	I1120 21:39:21.553743  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:39:21.553979  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:21.554094  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:21.579157  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:39:21.579463  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33927 <nil> <nil>}
	I1120 21:39:21.579484  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:39:22.222733  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:39:22.222764  884264 machine.go:97] duration metric: took 5.184080337s to provisionDockerMachine
	I1120 21:39:22.222784  884264 start.go:293] postStartSetup for "ha-409851-m03" (driver="docker")
	I1120 21:39:22.222795  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:39:22.222869  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:39:22.222949  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.258502  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.366087  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:39:22.370384  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:39:22.370464  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:39:22.370490  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:39:22.370582  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:39:22.370714  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:39:22.370740  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:39:22.370890  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:39:22.380356  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:39:22.405408  884264 start.go:296] duration metric: took 182.600947ms for postStartSetup
	I1120 21:39:22.405514  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:39:22.405570  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.425307  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.524350  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:39:22.529911  884264 fix.go:56] duration metric: took 5.939581904s for fixHost
	I1120 21:39:22.529937  884264 start.go:83] releasing machines lock for "ha-409851-m03", held for 5.939631735s
	I1120 21:39:22.530012  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:39:22.551424  884264 out.go:179] * Found network options:
	I1120 21:39:22.560397  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:39:22.563475  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563504  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563526  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:39:22.563536  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:39:22.563629  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:39:22.563664  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:39:22.563687  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.563722  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:39:22.593348  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.599158  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33927 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:39:22.850591  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:39:22.957812  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:39:22.957885  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:39:22.971629  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:39:22.971651  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:39:22.971683  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:39:22.971740  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:39:22.992266  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:39:23.017885  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:39:23.018003  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:39:23.047686  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:39:23.071594  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:39:23.341231  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:39:23.618998  884264 docker.go:234] disabling docker service ...
	I1120 21:39:23.619120  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:39:23.641818  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:39:23.676773  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:39:23.963173  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:39:24.189401  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:39:24.206793  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:39:24.222800  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:39:24.222943  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.233205  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:39:24.233339  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.242572  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.252400  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.262758  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:39:24.283691  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.293195  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.301843  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:39:24.310942  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:39:24.319806  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:39:24.328026  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:39:24.598997  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:40:54.919407  884264 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320335625s)
	I1120 21:40:54.919437  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:40:54.919501  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:40:54.923827  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:40:54.923896  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:40:54.927766  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:40:54.956875  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:40:54.956961  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:54.989990  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:40:55.031599  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:40:55.034874  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:40:55.042500  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:40:55.050091  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:40:55.084630  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:40:55.090169  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:40:55.103094  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:40:55.103394  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:55.103694  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:40:55.127072  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:40:55.127420  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.4
	I1120 21:40:55.127444  884264 certs.go:195] generating shared ca certs ...
	I1120 21:40:55.127465  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:40:55.127604  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:40:55.127650  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:40:55.127662  884264 certs.go:257] generating profile certs ...
	I1120 21:40:55.127765  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:40:55.127891  884264 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.b859e16b
	I1120 21:40:55.127933  884264 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:40:55.127943  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:40:55.127956  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:40:55.127969  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:40:55.127980  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:40:55.127992  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:40:55.128006  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:40:55.128033  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:40:55.128045  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:40:55.128112  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:40:55.128145  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:40:55.128160  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:40:55.128187  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:40:55.128214  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:40:55.128241  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:40:55.128290  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:40:55.128326  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.128344  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.128357  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.128426  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:40:55.150727  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:40:55.251340  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:40:55.256433  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:40:55.266784  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:40:55.270534  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:40:55.279775  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:40:55.284275  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:40:55.294321  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:40:55.298684  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:40:55.307319  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:40:55.310734  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:40:55.319458  884264 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:40:55.323063  884264 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:40:55.331533  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:40:55.350148  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:40:55.371874  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:40:55.394257  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:40:55.416142  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:40:55.436749  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:40:55.457715  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:40:55.490155  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:40:55.512635  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:40:55.534827  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:40:55.566135  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:40:55.588247  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:40:55.601998  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:40:55.617348  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:40:55.631678  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:40:55.644956  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:40:55.658910  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:40:55.674549  884264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:40:55.689850  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:40:55.697169  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.706702  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:40:55.715708  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719673  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.719798  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:40:55.761953  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:40:55.770722  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.779665  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:40:55.796200  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800339  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.800460  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:40:55.842260  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:40:55.849720  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.857782  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:40:55.865998  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870179  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.870265  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:40:55.917536  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:40:55.925307  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:40:55.929384  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:40:55.971056  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:40:56.013165  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:40:56.055581  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:40:56.098307  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:40:56.140587  884264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:40:56.181956  884264 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 21:40:56.182053  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:40:56.182091  884264 kube-vip.go:115] generating kube-vip config ...
	I1120 21:40:56.182144  884264 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:40:56.195065  884264 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:40:56.195123  884264 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:40:56.195188  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:40:56.203155  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:40:56.203249  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:40:56.210881  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:40:56.226182  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:40:56.241370  884264 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:40:56.258633  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:40:56.262629  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:40:56.274206  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.407402  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.425980  884264 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:40:56.426593  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:40:56.429208  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:40:56.432088  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:40:56.603926  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:40:56.618659  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:40:56.618769  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:40:56.619068  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m03" to be "Ready" ...
	W1120 21:40:58.623454  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	W1120 21:41:00.623718  884264 node_ready.go:57] node "ha-409851-m03" has "Ready":"Unknown" status (will retry)
	I1120 21:41:03.122881  884264 node_ready.go:49] node "ha-409851-m03" is "Ready"
	I1120 21:41:03.122915  884264 node_ready.go:38] duration metric: took 6.503802683s for node "ha-409851-m03" to be "Ready" ...
	I1120 21:41:03.122931  884264 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:41:03.123035  884264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:41:03.138113  884264 api_server.go:72] duration metric: took 6.712035257s to wait for apiserver process to appear ...
	I1120 21:41:03.138137  884264 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:41:03.138156  884264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:41:03.152932  884264 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:41:03.154364  884264 api_server.go:141] control plane version: v1.34.1
	I1120 21:41:03.154387  884264 api_server.go:131] duration metric: took 16.242967ms to wait for apiserver health ...
	I1120 21:41:03.154396  884264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:41:03.163795  884264 system_pods.go:59] 26 kube-system pods found
	I1120 21:41:03.163878  884264 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.163902  884264 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.163924  884264 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.163958  884264 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.163982  884264 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.164004  884264 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.164039  884264 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.164064  884264 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.164085  884264 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.164120  884264 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.164142  884264 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.164160  884264 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.164181  884264 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.164218  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.164236  884264 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.164257  884264 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.164295  884264 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.164319  884264 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.164339  884264 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.164374  884264 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.164397  884264 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.164414  884264 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.164435  884264 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.164470  884264 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.164490  884264 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.164510  884264 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.164542  884264 system_pods.go:74] duration metric: took 10.139581ms to wait for pod list to return data ...
	I1120 21:41:03.164569  884264 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:41:03.171615  884264 default_sa.go:45] found service account: "default"
	I1120 21:41:03.171638  884264 default_sa.go:55] duration metric: took 7.047374ms for default service account to be created ...
	I1120 21:41:03.171648  884264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:41:03.265734  884264 system_pods.go:86] 26 kube-system pods found
	I1120 21:41:03.267572  884264 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running
	I1120 21:41:03.267646  884264 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running
	I1120 21:41:03.267710  884264 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:41:03.267791  884264 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:41:03.267818  884264 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:41:03.267839  884264 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:41:03.267876  884264 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:41:03.267901  884264 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:41:03.267953  884264 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running
	I1120 21:41:03.267979  884264 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running
	I1120 21:41:03.268035  884264 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:41:03.268061  884264 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:41:03.268111  884264 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running
	I1120 21:41:03.268136  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:41:03.268187  884264 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running
	I1120 21:41:03.268216  884264 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running
	I1120 21:41:03.268276  884264 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:41:03.268345  884264 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:41:03.268371  884264 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:41:03.268391  884264 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:41:03.268432  884264 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:41:03.268515  884264 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:41:03.268541  884264 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:41:03.268560  884264 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:41:03.269441  884264 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:41:03.269511  884264 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running
	I1120 21:41:03.269535  884264 system_pods.go:126] duration metric: took 97.879853ms to wait for k8s-apps to be running ...
	I1120 21:41:03.269960  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:03.270187  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:03.292101  884264 system_svc.go:56] duration metric: took 22.131508ms WaitForService to wait for kubelet
	I1120 21:41:03.292181  884264 kubeadm.go:587] duration metric: took 6.866108619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:41:03.292218  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:03.296374  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296410  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296423  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296428  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296434  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296439  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296443  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:03.296447  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:03.296452  884264 node_conditions.go:105] duration metric: took 4.198189ms to run NodePressure ...
	I1120 21:41:03.296468  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:03.296492  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:03.300140  884264 out.go:203] 
	I1120 21:41:03.304344  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:03.304532  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.307946  884264 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:41:03.311732  884264 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:41:03.314710  884264 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:41:03.317785  884264 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:41:03.317884  884264 cache.go:65] Caching tarball of preloaded images
	I1120 21:41:03.318031  884264 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:41:03.318080  884264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:41:03.317859  884264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:41:03.318453  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.344793  884264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:41:03.344812  884264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:41:03.344825  884264 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:41:03.344848  884264 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:41:03.344898  884264 start.go:364] duration metric: took 35.644µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:41:03.344917  884264 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:41:03.344922  884264 fix.go:54] fixHost starting: m04
	I1120 21:41:03.345209  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.376330  884264 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:41:03.376356  884264 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:41:03.379471  884264 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:41:03.379560  884264 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:41:03.742042  884264 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:41:03.769660  884264 kic.go:430] container "ha-409851-m04" state is running.
	I1120 21:41:03.770657  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:03.796776  884264 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:41:03.797038  884264 machine.go:94] provisionDockerMachine start ...
	I1120 21:41:03.797104  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:03.823466  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:03.823770  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:03.823778  884264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:41:03.824435  884264 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:41:06.970676  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:06.970701  884264 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:41:06.970765  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:06.990700  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:06.991183  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:06.991203  884264 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:41:07.146851  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:41:07.146933  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:07.166460  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:07.166767  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:07.166788  884264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:41:07.311657  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:41:07.311684  884264 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:41:07.311699  884264 ubuntu.go:190] setting up certificates
	I1120 21:41:07.311712  884264 provision.go:84] configureAuth start
	I1120 21:41:07.311786  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:07.331035  884264 provision.go:143] copyHostCerts
	I1120 21:41:07.331091  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331124  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:41:07.331136  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:41:07.331213  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:41:07.331298  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331322  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:41:07.331326  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:41:07.331352  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:41:07.331393  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331415  884264 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:41:07.331422  884264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:41:07.331447  884264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:41:07.331497  884264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:41:08.623164  884264 provision.go:177] copyRemoteCerts
	I1120 21:41:08.623237  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:41:08.623286  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.639718  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:08.747935  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:41:08.748002  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:41:08.773774  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:41:08.773840  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:41:08.801882  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:41:08.801944  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:41:08.828179  884264 provision.go:87] duration metric: took 1.516452919s to configureAuth
	I1120 21:41:08.828204  884264 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:41:08.828439  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:08.828555  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:08.849615  884264 main.go:143] libmachine: Using SSH client type: native
	I1120 21:41:08.849931  884264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I1120 21:41:08.849949  884264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:41:09.190143  884264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:41:09.190166  884264 machine.go:97] duration metric: took 5.39311756s to provisionDockerMachine
	I1120 21:41:09.190177  884264 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:41:09.190190  884264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:41:09.190252  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:41:09.190297  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.211823  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.319209  884264 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:41:09.323014  884264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:41:09.323048  884264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:41:09.323086  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:41:09.323159  884264 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:41:09.323239  884264 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:41:09.323252  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:41:09.323406  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:41:09.331751  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:09.350101  884264 start.go:296] duration metric: took 159.908044ms for postStartSetup
	I1120 21:41:09.350192  884264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:41:09.350244  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.368495  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.469917  884264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:41:09.475514  884264 fix.go:56] duration metric: took 6.130583533s for fixHost
	I1120 21:41:09.475537  884264 start.go:83] releasing machines lock for "ha-409851-m04", held for 6.130630836s
	I1120 21:41:09.475607  884264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:41:09.501255  884264 out.go:179] * Found network options:
	I1120 21:41:09.504338  884264 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 21:41:09.507242  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507285  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507296  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507328  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507344  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:41:09.507354  884264 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:41:09.507446  884264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:41:09.507499  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.507798  884264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:41:09.507867  884264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:41:09.541478  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.545988  884264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:41:09.688666  884264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:41:09.768175  884264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:41:09.768304  884264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:41:09.777453  884264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:41:09.777480  884264 start.go:496] detecting cgroup driver to use...
	I1120 21:41:09.777528  884264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:41:09.777603  884264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:41:09.798578  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:41:09.812578  884264 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:41:09.812674  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:41:09.835768  884264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:41:09.850693  884264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:41:10.028876  884264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:41:10.166862  884264 docker.go:234] disabling docker service ...
	I1120 21:41:10.166933  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:41:10.183999  884264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:41:10.199107  884264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:41:10.347931  884264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:41:10.487321  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:41:10.501617  884264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:41:10.518198  884264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:41:10.518277  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.527726  884264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:41:10.527803  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.539453  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.549501  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.558643  884264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:41:10.568755  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.581525  884264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.591524  884264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:41:10.602370  884264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:41:10.613570  884264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:41:10.624948  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:10.769380  884264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:41:10.965596  884264 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:41:10.965735  884264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:41:10.970207  884264 start.go:564] Will wait 60s for crictl version
	I1120 21:41:10.970330  884264 ssh_runner.go:195] Run: which crictl
	I1120 21:41:10.974315  884264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:41:11.000434  884264 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:41:11.000593  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:41:11.038585  884264 ssh_runner.go:195] Run: crio --version
	I1120 21:41:11.076706  884264 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:41:11.079567  884264 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:41:11.082644  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:41:11.085633  884264 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 21:41:11.088629  884264 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:41:11.108683  884264 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:41:11.114419  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.127176  884264 mustload.go:66] Loading cluster: ha-409851
	I1120 21:41:11.127431  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.127709  884264 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:41:11.147050  884264 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:41:11.147378  884264 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:41:11.147394  884264 certs.go:195] generating shared ca certs ...
	I1120 21:41:11.147409  884264 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:41:11.147533  884264 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:41:11.147578  884264 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:41:11.147592  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:41:11.147607  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:41:11.147660  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:41:11.147683  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:41:11.147743  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:41:11.147786  884264 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:41:11.147795  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:41:11.147820  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:41:11.147843  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:41:11.147871  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:41:11.147915  884264 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:41:11.147959  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.147976  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.147989  884264 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.148010  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:41:11.176245  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:41:11.195856  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:41:11.214613  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:41:11.238690  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:41:11.260518  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:41:11.281726  884264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:41:11.301862  884264 ssh_runner.go:195] Run: openssl version
	I1120 21:41:11.308424  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.316198  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:41:11.324601  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330531  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.330646  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:41:11.373994  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:41:11.382317  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.390537  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:41:11.399975  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404118  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.404234  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:41:11.448070  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:41:11.457954  884264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.471564  884264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:41:11.480744  884264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486391  884264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.486458  884264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:41:11.534970  884264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:41:11.543238  884264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:41:11.547092  884264 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:41:11.547139  884264 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:41:11.547290  884264 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:41:11.547367  884264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:41:11.555116  884264 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:41:11.555189  884264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:41:11.563262  884264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:41:11.578268  884264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:41:11.593301  884264 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:41:11.598486  884264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:41:11.609343  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.746115  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.760921  884264 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:41:11.761346  884264 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:41:11.764709  884264 out.go:179] * Verifying Kubernetes components...
	I1120 21:41:11.767650  884264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:41:11.914567  884264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:41:11.938460  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:41:11.938535  884264 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:41:11.938816  884264 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	W1120 21:41:13.945651  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	W1120 21:41:16.442900  884264 node_ready.go:57] node "ha-409851-m04" has "Ready":"Unknown" status (will retry)
	I1120 21:41:17.943857  884264 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:41:17.943887  884264 node_ready.go:38] duration metric: took 6.005051124s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:41:17.943901  884264 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:41:17.943959  884264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:41:17.956954  884264 system_svc.go:56] duration metric: took 13.044338ms WaitForService to wait for kubelet
	I1120 21:41:17.956985  884264 kubeadm.go:587] duration metric: took 6.196020803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:41:17.957003  884264 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:41:17.961298  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961332  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961343  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961348  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961353  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961357  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961361  884264 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:41:17.961364  884264 node_conditions.go:123] node cpu capacity is 2
	I1120 21:41:17.961369  884264 node_conditions.go:105] duration metric: took 4.361006ms to run NodePressure ...
	I1120 21:41:17.961388  884264 start.go:242] waiting for startup goroutines ...
	I1120 21:41:17.961412  884264 start.go:256] writing updated cluster config ...
	I1120 21:41:17.961738  884264 ssh_runner.go:195] Run: rm -f paused
	I1120 21:41:17.965714  884264 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:41:17.966209  884264 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:41:17.987930  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994206  884264 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:41:17.994237  884264 pod_ready.go:86] duration metric: took 6.274933ms for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:17.994247  884264 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.000165  884264 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:41:18.000193  884264 pod_ready.go:86] duration metric: took 5.93943ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.004504  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012659  884264 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:41:18.012689  884264 pod_ready.go:86] duration metric: took 8.149311ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.012700  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020780  884264 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:41:18.020813  884264 pod_ready.go:86] duration metric: took 8.102492ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.020824  884264 pod_ready.go:83] waiting for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:18.167216  884264 request.go:683] "Waited before sending request" delay="146.304273ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-409851-m03"
	I1120 21:41:18.366937  884264 request.go:683] "Waited before sending request" delay="196.339897ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:18.767349  884264 request.go:683] "Waited before sending request" delay="195.31892ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:19.167191  884264 request.go:683] "Waited before sending request" delay="142.259307ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:20.032402  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:22.528455  884264 pod_ready.go:104] pod "etcd-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:41:25.033882  884264 pod_ready.go:94] pod "etcd-ha-409851-m03" is "Ready"
	I1120 21:41:25.033912  884264 pod_ready.go:86] duration metric: took 7.013080383s for pod "etcd-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.040254  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053388  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:41:25.053485  884264 pod_ready.go:86] duration metric: took 13.116035ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.053512  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166598  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:41:25.166678  884264 pod_ready.go:86] duration metric: took 113.122413ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.166704  884264 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.367416  884264 request.go:683] "Waited before sending request" delay="167.284948ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:25.394798  884264 pod_ready.go:94] pod "kube-apiserver-ha-409851-m03" is "Ready"
	I1120 21:41:25.394876  884264 pod_ready.go:86] duration metric: took 228.152279ms for pod "kube-apiserver-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.567359  884264 request.go:683] "Waited before sending request" delay="172.329236ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:41:25.572178  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.768229  884264 request.go:683] "Waited before sending request" delay="195.205343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:41:25.966769  884264 request.go:683] "Waited before sending request" delay="194.270004ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:41:25.970209  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:41:25.970236  884264 pod_ready.go:86] duration metric: took 398.02564ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:25.970246  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.166647  884264 request.go:683] "Waited before sending request" delay="196.282354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:41:26.367492  884264 request.go:683] "Waited before sending request" delay="194.321944ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:41:26.370972  884264 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:41:26.371028  884264 pod_ready.go:86] duration metric: took 400.775984ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.371038  884264 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:41:26.567360  884264 request.go:683] "Waited before sending request" delay="196.215941ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:26.766668  884264 request.go:683] "Waited before sending request" delay="195.346826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:26.966667  884264 request.go:683] "Waited before sending request" delay="95.147149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m03"
	I1120 21:41:27.167326  884264 request.go:683] "Waited before sending request" delay="196.326498ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.568613  884264 request.go:683] "Waited before sending request" delay="192.229084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	I1120 21:41:27.966849  884264 request.go:683] "Waited before sending request" delay="91.23035ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m03"
	W1120 21:41:28.378730  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:30.379114  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:32.879033  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:35.379045  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:37.878241  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:40.378797  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:42.878559  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:45.379157  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:47.877869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:49.881128  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:52.378869  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:54.878402  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:56.879168  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:41:59.386440  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:01.877608  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:04.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:06.379677  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:08.385036  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:10.879345  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:13.378081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:15.378210  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:17.878956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:20.379087  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:22.392566  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:24.878081  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:26.878436  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:29.390304  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:31.877421  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:33.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:35.878348  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:38.378256  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:40.378547  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:42.878117  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:44.878306  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:47.378856  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:49.379096  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:51.877443  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:53.877489  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:55.878600  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:42:57.878767  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:00.379377  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:02.878543  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:04.879548  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:07.377207  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:09.377567  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:11.379602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:13.380062  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:15.878005  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:17.879034  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:20.380298  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:22.877944  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:24.878873  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:27.379047  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:29.380796  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:31.882322  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:34.378874  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:36.379099  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:38.379341  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:40.379731  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:42.877518  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:44.878086  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:46.878385  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:49.377786  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:51.378044  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:53.378300  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:55.878538  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:57.878669  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:43:59.882674  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:02.378956  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:04.879155  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:07.378530  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:09.878139  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:11.879593  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:14.377334  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:16.378277  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:18.381420  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:20.878229  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:22.878418  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:24.879069  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:27.377824  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:29.878048  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:31.878313  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:34.379581  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:36.877137  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:38.878394  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:40.878828  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:43.378176  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:45.878068  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:47.878425  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:49.878602  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:52.378582  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:54.878764  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:57.378027  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:44:59.381427  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:01.885697  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:04.378368  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:06.378472  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:08.389992  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:10.878206  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:13.377529  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:15.378711  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	W1120 21:45:17.877998  884264 pod_ready.go:104] pod "kube-controller-manager-ha-409851-m03" is not "Ready", error: <nil>
	I1120 21:45:17.966316  884264 pod_ready.go:86] duration metric: took 3m51.595241121s for pod "kube-controller-manager-ha-409851-m03" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:45:17.966353  884264 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:45:17.966368  884264 pod_ready.go:40] duration metric: took 4m0.000621775s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:45:17.969588  884264 out.go:203] 
	W1120 21:45:17.972643  884264 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:45:17.975633  884264 out.go:203] 
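
	The wait loop above (pod_ready.go) polls the PodReady condition of "kube-controller-manager-ha-409851-m03" until the 4m0s deadline expires. For reference only, a minimal hypothetical sketch of the same check with client-go is shown below; it is not part of the test suite, and it assumes the current kubeconfig context points at the ha-409851 profile (minikube's default when the profile is started). The pod and namespace names are taken from the log above.
	
	// pod_ready_check.go - hypothetical sketch, not part of minikube or this test run.
	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		// Load the default kubeconfig; minikube points the current context at the profile.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// The pod the wait loop never observed becoming Ready.
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "kube-controller-manager-ha-409851-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
	
		// A pod counts as "Ready" when its PodReady condition has status True.
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %s Ready=%s reason=%q\n", pod.Name, cond.Status, cond.Reason)
			}
		}
	}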
	
	
	==> CRI-O <==
	Nov 20 21:39:20 ha-409851 crio[669]: time="2025-11-20T21:39:20.249764629Z" level=info msg="Started container" PID=1236 containerID=e8fdabfa9a8b8aa91fe261bccd17d97129ae2a6b35505d477696e70753cdb6b7 description=kube-system/coredns-66bc5c9577-vfsp6/coredns id=74568f7d-6558-4ed8-91f1-68f1990c30b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13
	Nov 20 21:39:49 ha-409851 conmon[1114]: conmon 21c3c6a6f55d40a36bf5 <ninfo>: container 1116 exited with status 1
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.632784625Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0fd39ea-fa77-479d-b191-90503a9b28fb name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.633994314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=21e2e835-8254-4694-a7aa-72fd4afb923a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.6395122Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b15bfc0f-5310-494c-ac34-54e5ad11a7d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.639630371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644264636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644498854Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56db3570444b73799d70709773076eebd0890ab60259066f030c2205355ff337/merged/etc/passwd: no such file or directory"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644520434Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56db3570444b73799d70709773076eebd0890ab60259066f030c2205355ff337/merged/etc/group: no such file or directory"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.644774427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.679571848Z" level=info msg="Created container a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31: kube-system/storage-provisioner/storage-provisioner" id=b15bfc0f-5310-494c-ac34-54e5ad11a7d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.680406648Z" level=info msg="Starting container: a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31" id=0c9c0665-a074-4f39-884e-1de941f1ab50 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:39:50 ha-409851 crio[669]: time="2025-11-20T21:39:50.682871835Z" level=info msg="Started container" PID=1415 containerID=a4b68b4348d44ef2a900f09b3024dca5482c2a4de323b2dcae2bd89dbddd6f31 description=kube-system/storage-provisioner/storage-provisioner id=0c9c0665-a074-4f39-884e-1de941f1ab50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.268138893Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.332837674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.333104484Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.333245335Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378097346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378136591Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.378166631Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386110048Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386371451Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.386801782Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.391938158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:40:00 ha-409851 crio[669]: time="2025-11-20T21:40:00.391990917Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	a4b68b4348d44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   1797f15844d53       storage-provisioner                 kube-system
	e8fdabfa9a8b8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   1                   42485995e8876       coredns-66bc5c9577-vfsp6            kube-system
	2d712803661e1       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   1                   4ca111ef4be62       busybox-7b57f96db7-mgvhj            default
	64d8739737a07       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   1                   2896dc90c65df       coredns-66bc5c9577-pjk6c            kube-system
	d0e16d539ff71       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 minutes ago       Running             kube-proxy                1                   4b383895c0d77       kube-proxy-4qqxh                    kube-system
	4a54d0081476a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               1                   84b5e44666140       kindnet-7hmbf                       kube-system
	21c3c6a6f55d4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       1                   1797f15844d53       storage-provisioner                 kube-system
	59d058da43a3d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   2                   43b1b9d53686c       kube-controller-manager-ha-409851   kube-system
	386fda302f30f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            2                   ee26925111068       kube-apiserver-ha-409851            kube-system
	5c78de3db456c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   88b09f2bac280       etcd-ha-409851                      kube-system
	be96e9e3ffb47       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   8637dd7ca13e1       kube-scheduler-ha-409851            kube-system
	b40d2cfd438a8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            1                   ee26925111068       kube-apiserver-ha-409851            kube-system
	696b700dcb568       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  0                   8537a8d9a1f65       kube-vip-ha-409851                  kube-system
	bbe2aa5c20be5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   1                   43b1b9d53686c       kube-controller-manager-ha-409851   kube-system
	
	
	==> coredns [64d8739737a078f7c00d99f881554e80533e8bfccd6b2cfc10dcc615416aee55] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55521 - 34960 "HINFO IN 1541082872970593707.3686323008074576518. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01805291s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e8fdabfa9a8b8aa91fe261bccd17d97129ae2a6b35505d477696e70753cdb6b7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37858 - 46462 "HINFO IN 2122825953572513070.5747140387215178598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022452217s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-409851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:43:59 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-409851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                1f114e92-c1bf-4c10-9121-0a6c185877b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mgvhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-pjk6c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-vfsp6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-409851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-7hmbf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-409851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-409851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4qqxh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-409851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-409851                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m13s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-409851 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   Starting                 6m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m57s (x8 over 6m57s)  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m57s (x8 over 6m57s)  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m57s (x8 over 6m57s)  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	
	
	Name:               ha-409851-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:45:32 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:45:32 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:45:32 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:45:32 +0000   Thu, 20 Nov 2025 21:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-409851-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3904cc8f-d8d1-4880-8dca-3fb5e1048dff
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqh2f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-409851-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-56lr8                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-409851-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-409851-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pz7vt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-409851-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-409851-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 6m17s                  kube-proxy       
	  Normal   Starting                 7m28s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Warning  CgroupV1                 8m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m15s (x8 over 8m16s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m15s (x8 over 8m16s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m15s (x8 over 8m16s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 6m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m54s (x8 over 6m54s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m54s (x8 over 6m54s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m54s (x8 over 6m54s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	
	
	Name:               ha-409851-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:45:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:43:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-409851-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                2c1b4976-2a70-4f78-8646-ed9804d613b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-snllw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kindnet-2d5r9               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m36s
	  kube-system                 kube-proxy-xnhl6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m9s                   kube-proxy       
	  Normal   Starting                 9m33s                  kube-proxy       
	  Warning  CgroupV1                 9m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    9m36s (x3 over 9m36s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m36s (x3 over 9m36s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m36s (x3 over 9m36s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m34s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           9m33s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           9m33s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeReady                8m54s                  kubelet          Node ha-409851-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           5m46s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeNotReady             5m24s                  node-controller  Node ha-409851-m04 status is now: NodeNotReady
	  Normal   Starting                 4m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m27s (x8 over 4m31s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x8 over 4m31s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x8 over 4m31s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Nov20 19:51] overlayfs: idmapped layers are currently not supported
	[ +26.087379] overlayfs: idmapped layers are currently not supported
	[Nov20 19:52] overlayfs: idmapped layers are currently not supported
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	[Nov20 21:32] overlayfs: idmapped layers are currently not supported
	[Nov20 21:33] overlayfs: idmapped layers are currently not supported
	[Nov20 21:34] overlayfs: idmapped layers are currently not supported
	[Nov20 21:36] overlayfs: idmapped layers are currently not supported
	[Nov20 21:37] overlayfs: idmapped layers are currently not supported
	[Nov20 21:38] overlayfs: idmapped layers are currently not supported
	[  +3.034217] overlayfs: idmapped layers are currently not supported
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5c78de3db456c35c2eafd8be0e59c965664f006cb3e9b19c4d9b05b81ab079fc] <==
	{"level":"info","ts":"2025-11-20T21:40:58.400223Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"13577a22751ca4e7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-20T21:40:58.400266Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.413381Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:40:58.415661Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:40:59.570209Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"13577a22751ca4e7","rtt":"10.069518ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:40:59.570288Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"13577a22751ca4e7","rtt":"2.765126ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T21:45:25.540144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43964","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:45:25.670113Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(9917626278389854547 12593026477526642892)"}
	{"level":"info","ts":"2025-11-20T21:45:25.676695Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"13577a22751ca4e7","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T21:45:25.676837Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.677041Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.677276Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.677359Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.677511Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.677590Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.677826Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7","error":"context canceled"}
	{"level":"warn","ts":"2025-11-20T21:45:25.677912Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"13577a22751ca4e7","error":"failed to read 13577a22751ca4e7 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-11-20T21:45:25.677968Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.678208Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7","error":"context canceled"}
	{"level":"info","ts":"2025-11-20T21:45:25.678280Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.682239Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.682328Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"13577a22751ca4e7"}
	{"level":"info","ts":"2025-11-20T21:45:25.682399Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.695341Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"13577a22751ca4e7"}
	{"level":"warn","ts":"2025-11-20T21:45:25.707175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:50244","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:45:35 up  4:27,  0 user,  load average: 0.71, 1.02, 1.35
	Linux ha-409851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a54d0081476a29dc91465df41cda7c5c9c2cb8309fda4632546728f61e59cf6] <==
	I1120 21:45:00.262720       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:00.263453       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:00.263542       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:10.265604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:10.265641       1 main.go:301] handling current node
	I1120 21:45:10.265658       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:10.265666       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:10.265822       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 21:45:10.265836       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:10.265893       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:10.265905       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:20.261614       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:20.261650       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:45:20.261783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:20.261799       1 main.go:301] handling current node
	I1120 21:45:20.261815       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:20.261822       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:20.261880       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 21:45:20.261891       1 main.go:324] Node ha-409851-m03 has CIDR [10.244.2.0/24] 
	I1120 21:45:30.262319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:45:30.262356       1 main.go:301] handling current node
	I1120 21:45:30.262373       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:45:30.262379       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:45:30.262544       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:45:30.262555       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [386fda302f30f7ebb1d4d339166cc1ec54dfa445272705792165e6163d57744c] <==
	I1120 21:39:13.787909       1 controller.go:90] Starting OpenAPI V3 controller
	I1120 21:39:13.788159       1 naming_controller.go:299] Starting NamingConditionController
	I1120 21:39:13.788226       1 establishing_controller.go:81] Starting EstablishingController
	I1120 21:39:13.788279       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1120 21:39:13.788325       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1120 21:39:13.788371       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1120 21:39:13.913965       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:39:13.940203       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:39:13.945914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:39:13.946009       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:39:13.946800       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:39:13.946821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:39:13.948219       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:39:13.948249       1 policy_source.go:240] refreshing policies
	W1120 21:39:13.950492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1120 21:39:13.951945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:39:13.969005       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1120 21:39:13.978030       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1120 21:39:13.992776       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:39:14.268877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1120 21:39:16.440486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1120 21:39:19.450109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:39:22.002455       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:39:22.120006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:39:22.146838       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [b40d2cfd438a8dc3a5f89de00510928701b9ef1887f2f4f9055a3978ea2197fa] <==
	I1120 21:38:39.115874       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1120 21:38:41.567494       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1120 21:38:41.567533       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1120 21:38:41.567541       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1120 21:38:41.567547       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1120 21:38:41.567551       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1120 21:38:41.567556       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1120 21:38:41.567561       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1120 21:38:41.567565       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1120 21:38:41.567569       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1120 21:38:41.567574       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1120 21:38:41.567578       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1120 21:38:41.567582       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1120 21:38:41.597999       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 21:38:41.599390       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1120 21:38:41.607075       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1120 21:38:41.623950       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:38:41.639375       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1120 21:38:41.639482       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1120 21:38:41.639814       1 instance.go:239] Using reconciler: lease
	W1120 21:38:41.641190       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 21:39:01.597873       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 21:39:01.599901       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W1120 21:39:01.641307       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1120 21:39:01.641306       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [59d058da43a3deb02cebe99d92bd9fea5f53c1d0e1d4781459318e9f5ec8e02b] <==
	I1120 21:39:21.928513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:39:21.928883       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 21:39:21.928907       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:39:21.934278       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:39:21.934649       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:39:21.934706       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:39:21.943813       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:39:21.943872       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:39:21.944918       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:39:21.944985       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:39:21.945073       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851"
	I1120 21:39:21.945122       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m02"
	I1120 21:39:21.945144       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m03"
	I1120 21:39:21.945173       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m04"
	I1120 21:39:21.945196       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:39:21.946153       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:39:21.955276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:39:21.955492       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:39:21.955493       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:39:21.969284       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:39:21.977060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:41:17.714744       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-409851-m04"
	I1120 21:45:27.967608       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-409851-m04"
	E1120 21:45:28.040885       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-409851-m03\", UID:\"6ff645ff-f0b8-46a8-8c68-780cff2a5099\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-409851-m03\", UID:\"9278fa33-da79-4d98-a8f8-6efc4bee9dd6\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-409851-m03\" not found" logger="UnhandledError"
	E1120 21:45:28.079946       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-409851-m03\", UID:\"15813d81-8cc3-42e3-a046-f27e713817a9\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noC
opy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-409851-m03\", UID:\"9278fa33-da79-4d98-a8f8-6efc4bee9dd6\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-409851-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bbe2aa5c20be55307484a6dc5e0cf27f1adb8b5e2bad7448657394d0153a3e84] <==
	I1120 21:38:41.548098       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:38:44.614759       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 21:38:44.618354       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:38:44.620563       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 21:38:44.622306       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 21:38:44.623227       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 21:38:44.624940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 21:39:13.636467       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d0e16d539ff71abab806825801bb28f583fae27f1d711dac09b9ccaed9935625] <==
	I1120 21:39:19.834696       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:39:20.701909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:39:20.836637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:39:20.836803       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:39:20.836987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:39:21.023295       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:39:21.077884       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:39:21.318794       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:39:21.319212       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:39:21.327635       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:39:21.329660       1 config.go:200] "Starting service config controller"
	I1120 21:39:21.329733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:39:21.329805       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:39:21.329839       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:39:21.329876       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:39:21.329902       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:39:21.334052       1 config.go:309] "Starting node config controller"
	I1120 21:39:21.334154       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:39:21.334188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:39:21.430611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:39:21.430705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:39:21.430721       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be96e9e3ffb4708dccf24988f485136e1039f591a2e9c93edef5d830431fa080] <==
	I1120 21:39:12.712397       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1120 21:39:13.686244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:39:13.686326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:39:13.686370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:39:13.686413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:39:13.686453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:39:13.686492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:39:13.686533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:39:13.686596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:39:13.686639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:39:13.686681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:39:13.686732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:39:13.686764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:39:13.686799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:39:13.686879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:39:13.686923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:39:13.687046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:39:13.695222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:39:13.730211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:39:13.852932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 21:39:15.213191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1120 21:45:22.238748       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-snllw\": pod busybox-7b57f96db7-snllw is already assigned to node \"ha-409851-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-snllw" node="ha-409851-m04"
	E1120 21:45:22.269121       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 54e5ea80-1a27-4789-b411-74d050a9788c(default/busybox-7b57f96db7-snllw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-snllw"
	E1120 21:45:22.270473       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-snllw\": pod busybox-7b57f96db7-snllw is already assigned to node \"ha-409851-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-snllw"
	I1120 21:45:22.271396       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-snllw" node="ha-409851-m04"
	
	
	==> kubelet <==
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.244029     809 apiserver.go:52] "Watching apiserver"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.253241     809 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-409851" podUID="714ee0ad-584f-4bd3-b031-cc6e2485512c"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.308093     809 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.308346     809 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.334825     809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 21:39:19 ha-409851 kubelet[809]: E1120 21:39:19.335285     809 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-vip-ha-409851\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="6f4588d400318593d47cec16914af85c" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.413889     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f7683fa-0199-444f-bcf4-42666203c1fa-xtables-lock\") pod \"kube-proxy-4qqxh\" (UID: \"2f7683fa-0199-444f-bcf4-42666203c1fa\") " pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414105     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f7683fa-0199-444f-bcf4-42666203c1fa-lib-modules\") pod \"kube-proxy-4qqxh\" (UID: \"2f7683fa-0199-444f-bcf4-42666203c1fa\") " pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414260     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-xtables-lock\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414418     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-cni-cfg\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414532     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/562945a4-84ec-46c8-b77e-abdd9d577c9c-lib-modules\") pod \"kindnet-7hmbf\" (UID: \"562945a4-84ec-46c8-b77e-abdd9d577c9c\") " pod="kube-system/kindnet-7hmbf"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.414721     809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/349c85dc-6341-43ab-b388-8734d72e3040-tmp\") pod \"storage-provisioner\" (UID: \"349c85dc-6341-43ab-b388-8734d72e3040\") " pod="kube-system/storage-provisioner"
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.538354     809 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.595455     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb WatchSource:0}: Error finding container 1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb: Status 404 returned error can't find the container with id 1797f15844d53106be53db5c9d3fd3975292a67047660798629ddeadf54d83bb
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.619580     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea WatchSource:0}: Error finding container 84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea: Status 404 returned error can't find the container with id 84b5e4466614067d6d89104ea9dd7c5ccc7fe8930c1a9f35a249ed3c331e30ea
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.914004     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367 WatchSource:0}: Error finding container 2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367: Status 404 returned error can't find the container with id 2896dc90c65dfca1af86e02c677c9e2879bd0ad714d3c947dfa45ff146f61367
	Nov 20 21:39:19 ha-409851 kubelet[809]: I1120 21:39:19.931964     809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-409851" podStartSLOduration=0.931934967 podStartE2EDuration="931.934967ms" podCreationTimestamp="2025-11-20 21:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:39:19.931574117 +0000 UTC m=+41.856119838" watchObservedRunningTime="2025-11-20 21:39:19.931934967 +0000 UTC m=+41.856480688"
	Nov 20 21:39:19 ha-409851 kubelet[809]: W1120 21:39:19.976075     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a WatchSource:0}: Error finding container 4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a: Status 404 returned error can't find the container with id 4ca111ef4be62d78c7a1ed21e6a44df07dbf900d08c75258fb1b742e4a65334a
	Nov 20 21:39:20 ha-409851 kubelet[809]: W1120 21:39:20.042029     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13 WatchSource:0}: Error finding container 42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13: Status 404 returned error can't find the container with id 42485995e8876f34db7501ec41a59804a4ed9ae2116ef9d43f971450342dbf13
	Nov 20 21:39:20 ha-409851 kubelet[809]: I1120 21:39:20.341590     809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ab4d253eaf1d40f90b8f9740737427" path="/var/lib/kubelet/pods/50ab4d253eaf1d40f90b8f9740737427/volumes"
	Nov 20 21:39:38 ha-409851 kubelet[809]: E1120 21:39:38.225519     809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6\": container with ID starting with 637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6 not found: ID does not exist" containerID="637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6"
	Nov 20 21:39:38 ha-409851 kubelet[809]: I1120 21:39:38.226038     809 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6" err="rpc error: code = NotFound desc = could not find container \"637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6\": container with ID starting with 637206e4d528c8fef7559376038f308ee752e5211a8890e33dc3ea16b654e0e6 not found: ID does not exist"
	Nov 20 21:39:38 ha-409851 kubelet[809]: E1120 21:39:38.226672     809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b\": container with ID starting with e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b not found: ID does not exist" containerID="e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b"
	Nov 20 21:39:38 ha-409851 kubelet[809]: I1120 21:39:38.226841     809 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b" err="rpc error: code = NotFound desc = could not find container \"e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b\": container with ID starting with e14397827bdf85b8d83d2bcf9ec8d1f88e039180b92e2b4ca64bd53c98a6441b not found: ID does not exist"
	Nov 20 21:39:50 ha-409851 kubelet[809]: I1120 21:39:50.632093     809 scope.go:117] "RemoveContainer" containerID="21c3c6a6f55d40a36bf5628afc1fc7cfc6b87251643b9599eab6ab7a2a06740d"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-409851 -n ha-409851
helpers_test.go:269: (dbg) Run:  kubectl --context ha-409851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.46s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (369.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1120 21:46:15.820123  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:38.577770  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:15.819693  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:41.649762  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m6.674074335s)

                                                
                                                
-- stdout --
	* [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-409851-m04" worker node in "ha-409851" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:46:12.791438  893814 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:46:12.791547  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791556  893814 out.go:374] Setting ErrFile to fd 2...
	I1120 21:46:12.791561  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791812  893814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:46:12.792153  893814 out.go:368] Setting JSON to false
	I1120 21:46:12.792975  893814 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16098,"bootTime":1763659075,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:46:12.793039  893814 start.go:143] virtualization:  
	I1120 21:46:12.796567  893814 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:46:12.800274  893814 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:46:12.800333  893814 notify.go:221] Checking for updates...
	I1120 21:46:12.805930  893814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:46:12.808740  893814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:12.811665  893814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:46:12.814590  893814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:46:12.817489  893814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:46:12.820869  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:12.821456  893814 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:46:12.854504  893814 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:46:12.854629  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.916245  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.907017867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.916354  893814 docker.go:319] overlay module found
	I1120 21:46:12.921281  893814 out.go:179] * Using the docker driver based on existing profile
	I1120 21:46:12.924086  893814 start.go:309] selected driver: docker
	I1120 21:46:12.924103  893814 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.924235  893814 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:46:12.924335  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.982109  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.972838498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.982542  893814 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:46:12.982605  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:12.982654  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:12.982705  893814 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.987881  893814 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:46:12.990803  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:12.993745  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:12.996606  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:12.996692  893814 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:46:12.996690  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:12.996704  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:12.996891  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:12.996899  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:12.997043  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.017636  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:13.017661  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:13.017680  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:13.017708  893814 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:13.017784  893814 start.go:364] duration metric: took 50.068µs to acquireMachinesLock for "ha-409851"
	I1120 21:46:13.017814  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:13.017825  893814 fix.go:54] fixHost starting: 
	I1120 21:46:13.018084  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.035594  893814 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:46:13.035627  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:13.038907  893814 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:46:13.039022  893814 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:46:13.304460  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.328120  893814 kic.go:430] container "ha-409851" state is running.
	I1120 21:46:13.328719  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:13.354344  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.354582  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:13.354651  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:13.379550  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:13.379870  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:13.379890  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:13.380728  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:46:16.522806  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.522896  893814 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:46:16.523007  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.540197  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.540514  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.540535  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:46:16.694351  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.694434  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.711779  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.712102  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.712124  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:16.851168  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:16.851196  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:16.851221  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:16.851230  893814 provision.go:84] configureAuth start
	I1120 21:46:16.851299  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:16.868945  893814 provision.go:143] copyHostCerts
	I1120 21:46:16.868995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869035  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:16.869055  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869140  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:16.869236  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869258  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:16.869266  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869304  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:16.869353  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869373  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:16.869384  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869416  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:16.869469  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:46:16.952356  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:16.952425  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:16.952478  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.973308  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.074564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:17.074634  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:46:17.091858  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:17.091917  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:17.109606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:17.109674  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:17.127878  893814 provision.go:87] duration metric: took 276.622438ms to configureAuth
	I1120 21:46:17.127903  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:17.128138  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:17.128246  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.145230  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:17.145555  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:17.145568  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:17.521503  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:17.521523  893814 machine.go:97] duration metric: took 4.166931199s to provisionDockerMachine
	I1120 21:46:17.521535  893814 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:46:17.521545  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:17.521607  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:17.521648  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.543040  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.642924  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:17.646266  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:17.646295  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:17.646306  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:17.646362  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:17.646441  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:17.646453  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:17.646557  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:17.654029  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:17.671759  893814 start.go:296] duration metric: took 150.208491ms for postStartSetup
	I1120 21:46:17.671861  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:17.671903  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.688970  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.788149  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:17.792950  893814 fix.go:56] duration metric: took 4.775117155s for fixHost
	I1120 21:46:17.792985  893814 start.go:83] releasing machines lock for "ha-409851", held for 4.775188491s
	I1120 21:46:17.793094  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:17.811172  893814 ssh_runner.go:195] Run: cat /version.json
	I1120 21:46:17.811227  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.811496  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:17.811569  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.830577  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.847514  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:18.032855  893814 ssh_runner.go:195] Run: systemctl --version
	I1120 21:46:18.039676  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:18.084631  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:18.089315  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:18.089397  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:18.097880  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:18.097906  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:18.097957  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:18.098046  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:18.113581  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:18.127110  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:18.127198  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:18.143327  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:18.156859  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:18.285846  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:18.406177  893814 docker.go:234] disabling docker service ...
	I1120 21:46:18.406303  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:18.422621  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:18.436488  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:18.557150  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:18.669376  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:18.683020  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:18.696701  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:18.696805  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.705450  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:18.705544  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.714727  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.724078  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.733001  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:18.741246  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.750057  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.758559  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.767154  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:18.774675  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:18.782542  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:18.908183  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:46:19.102647  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:46:19.102768  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:46:19.107633  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:46:19.107713  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:46:19.112020  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:46:19.139825  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:46:19.139929  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.171276  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.211415  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:46:19.214291  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:46:19.231738  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:46:19.235755  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.246147  893814 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:46:19.246304  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:19.246367  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.290538  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.290565  893814 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:46:19.290626  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.316155  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.316180  893814 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:46:19.316189  893814 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:46:19.316303  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:46:19.316387  893814 ssh_runner.go:195] Run: crio config
	I1120 21:46:19.371279  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:19.371300  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:19.371316  893814 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:46:19.371339  893814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:46:19.371462  893814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:46:19.371484  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:46:19.371537  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:46:19.384116  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:19.384238  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:46:19.384326  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:46:19.392356  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:46:19.392430  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:46:19.400069  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:46:19.413705  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:46:19.427554  893814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:46:19.440926  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:46:19.454200  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:46:19.457772  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.467840  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:19.582412  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:46:19.599710  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:46:19.599791  893814 certs.go:195] generating shared ca certs ...
	I1120 21:46:19.599822  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.599996  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:46:19.600074  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:46:19.600106  893814 certs.go:257] generating profile certs ...
	I1120 21:46:19.600223  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:46:19.600276  893814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee
	I1120 21:46:19.600310  893814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1120 21:46:19.750831  893814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee ...
	I1120 21:46:19.750905  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee: {Name:mk539a3dda8a36b48c6c5c30b7491f9043b065a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751146  893814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee ...
	I1120 21:46:19.751277  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee: {Name:mk851c2f98f193e8bb483e43db8a657c69eae8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751416  893814 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:46:19.751615  893814 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:46:19.751796  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:46:19.751838  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:46:19.751886  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:46:19.751918  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:46:19.751961  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:46:19.751995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:46:19.752027  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:46:19.752070  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:46:19.752104  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:46:19.752174  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:46:19.752242  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:46:19.752268  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:46:19.752317  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:46:19.752367  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:46:19.752427  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:46:19.752538  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:19.752606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:46:19.752639  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:46:19.752686  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:19.753263  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:46:19.782536  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:46:19.807080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:46:19.842006  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:46:19.863690  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:46:19.882351  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:46:19.902131  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:46:19.923247  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:46:19.943308  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:46:19.961281  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:46:19.981823  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:46:19.999815  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:46:20.019398  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:46:20.026511  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.035530  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:46:20.043827  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048146  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048252  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.090685  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:46:20.099210  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.107103  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:46:20.115263  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119310  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119405  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.160958  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:46:20.168922  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.176806  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:46:20.184554  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188641  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188742  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.232577  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:46:20.246815  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:46:20.252000  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:46:20.307993  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:46:20.361067  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:46:20.404267  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:46:20.471141  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:46:20.556774  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
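Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would mark it for regeneration. The same check expressed in Go with crypto/x509, as a rough sketch (the file path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, which is the condition "openssl x509 -checkend <seconds>" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour) // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}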
	I1120 21:46:20.620581  893814 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:20.620772  893814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:46:20.620872  893814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:46:20.672595  893814 cri.go:89] found id: "e758e4601a79aacd9dd015c82692281d156d9100d6bc2fb480b11d07ff223294"
	I1120 21:46:20.672675  893814 cri.go:89] found id: "bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede"
	I1120 21:46:20.672702  893814 cri.go:89] found id: "29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de"
	I1120 21:46:20.672723  893814 cri.go:89] found id: "d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd"
	I1120 21:46:20.672755  893814 cri.go:89] found id: "538778f2e99f0831684f744a21c231b476e72c223d7af53829698631c58b4b38"
	I1120 21:46:20.672779  893814 cri.go:89] found id: ""
	I1120 21:46:20.672864  893814 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:46:20.692788  893814 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:46:20Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:46:20.692935  893814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:46:20.704191  893814 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:46:20.704251  893814 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:46:20.704341  893814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:46:20.715485  893814 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:20.716011  893814 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.716179  893814 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:46:20.716543  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.717160  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:46:20.717985  893814 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:46:20.718059  893814 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:46:20.718131  893814 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:46:20.718157  893814 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:46:20.718177  893814 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:46:20.718212  893814 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
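After repairing the "ha-409851" entry in the kubeconfig, the tool builds the rest.Config shown above directly from the profile's client certificate and key. Outside minikube, the usual way to obtain an equivalent client is client-go's clientcmd helpers; a minimal sketch, assuming client-go is available and the kubeconfig path comes from the environment:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path is read from $KUBECONFIG here; a real tool would take a flag.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// List nodes as a quick sanity check that the repaired config actually works.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}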
	I1120 21:46:20.730102  893814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:46:20.744141  893814 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:46:20.744165  893814 kubeadm.go:602] duration metric: took 39.885836ms to restartPrimaryControlPlane
	I1120 21:46:20.744174  893814 kubeadm.go:403] duration metric: took 123.603025ms to StartCluster
	I1120 21:46:20.744191  893814 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.744256  893814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.744888  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.745066  893814 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:46:20.745084  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:46:20.745100  893814 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:46:20.745725  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.751118  893814 out.go:179] * Enabled addons: 
	I1120 21:46:20.754039  893814 addons.go:515] duration metric: took 8.930638ms for enable addons: enabled=[]
	I1120 21:46:20.754080  893814 start.go:247] waiting for cluster config update ...
	I1120 21:46:20.754090  893814 start.go:256] writing updated cluster config ...
	I1120 21:46:20.757337  893814 out.go:203] 
	I1120 21:46:20.760537  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.760717  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.764214  893814 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:46:20.767355  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:20.770446  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:20.773470  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:20.773563  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:20.773537  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:20.773902  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:20.773939  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:20.774117  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.801641  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:20.801660  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:20.801671  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:20.801698  893814 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:20.801748  893814 start.go:364] duration metric: took 35.446µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:46:20.801767  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:20.801774  893814 fix.go:54] fixHost starting: m02
	I1120 21:46:20.802025  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:20.830914  893814 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:46:20.830963  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:20.835462  893814 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:46:20.835556  893814 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:46:21.218686  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:21.252602  893814 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:46:21.252990  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:21.287738  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:21.288165  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:21.288242  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:21.321625  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:21.321986  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:21.322003  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:21.324132  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50986->127.0.0.1:33942: read: connection reset by peer
	I1120 21:46:24.541429  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.541464  893814 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:46:24.541536  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.591123  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.591436  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.591454  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:46:24.829670  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.830508  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.868680  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.868993  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.869016  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:25.086415  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:25.086446  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:25.086467  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:25.086477  893814 provision.go:84] configureAuth start
	I1120 21:46:25.086545  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:25.116440  893814 provision.go:143] copyHostCerts
	I1120 21:46:25.116492  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116528  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:25.116540  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116614  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:25.116704  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116727  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:25.116737  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116766  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:25.116814  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116842  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:25.116852  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116880  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:25.116934  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:46:25.299085  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:25.299152  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:25.299205  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.334304  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:25.454142  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:25.454207  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:25.519452  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:25.519523  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:46:25.579807  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:25.579872  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:25.625625  893814 provision.go:87] duration metric: took 539.133654ms to configureAuth
	I1120 21:46:25.625654  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:25.625881  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:25.626005  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.676739  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:25.677055  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:25.677078  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:27.313592  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:27.313611  893814 machine.go:97] duration metric: took 6.025425517s to provisionDockerMachine
	I1120 21:46:27.313622  893814 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:46:27.313633  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:27.313709  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:27.313760  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.348890  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.472301  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:27.476588  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:27.476614  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:27.476626  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:27.476683  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:27.476757  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:27.476765  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:27.476876  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:27.485018  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:27.504498  893814 start.go:296] duration metric: took 190.860481ms for postStartSetup
	I1120 21:46:27.504660  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:27.504741  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.528788  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.644723  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:27.649843  893814 fix.go:56] duration metric: took 6.84806345s for fixHost
	I1120 21:46:27.649868  893814 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.848112263s
	I1120 21:46:27.649945  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:27.674188  893814 out.go:179] * Found network options:
	I1120 21:46:27.677242  893814 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:46:27.680124  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:46:27.680168  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:46:27.680244  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:27.680247  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:27.680288  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.680307  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.700610  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.707137  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.925105  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:28.059572  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:28.059657  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:28.074369  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:28.074399  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:28.074432  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:28.074499  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:28.097384  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:28.115088  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:28.115159  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:28.145681  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:28.169842  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:28.395806  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:28.633186  893814 docker.go:234] disabling docker service ...
	I1120 21:46:28.633295  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:28.653639  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:28.673051  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:28.911134  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:29.139790  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:29.165309  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:29.189385  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:29.189499  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.203577  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:29.203723  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.219781  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.229964  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.247451  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:29.257774  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.270135  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.279629  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.289968  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:29.299527  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:29.308385  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:29.625535  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:47:59.900415  893814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.274799929s)
	I1120 21:47:59.900439  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:47:59.900493  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:47:59.904340  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:47:59.904408  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:47:59.908141  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:47:59.934786  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
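Restarting crio on this node took about 90 seconds, after which the log waits (up to 60s) for /var/run/crio/crio.sock to appear and then for crictl to answer. A small sketch of that "wait for the runtime socket" step, with the socket path and timeout taken from the log and everything else assumed:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until a unix socket exists at path or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}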
	I1120 21:47:59.934878  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:47:59.970641  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:00.031101  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:00.052822  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:00.070551  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:00.144325  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:00.158851  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:00.193319  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:00.193638  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:00.193952  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:00.257208  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:00.257542  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:48:00.257559  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:00.257575  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:00.257700  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:00.257744  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:00.257751  893814 certs.go:257] generating profile certs ...
	I1120 21:48:00.257839  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:48:00.257904  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.e3c52656
	I1120 21:48:00.257941  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:48:00.257951  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:00.257964  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:00.257975  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:00.257985  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:00.257997  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:48:00.258009  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:48:00.258021  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:48:00.258032  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:48:00.258087  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:00.258118  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:00.258141  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:00.258171  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:00.258206  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:00.258229  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:00.258276  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:00.258311  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.258325  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.258342  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:00.258416  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:48:00.286658  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:48:00.411419  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:48:00.416825  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:48:00.429106  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:48:00.434141  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:48:00.446859  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:48:00.451932  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:48:00.463743  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:48:00.468370  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:48:00.478967  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:48:00.483728  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:48:00.495516  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:48:00.499782  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:48:00.510022  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:00.533411  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:00.557609  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:00.579641  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:00.599346  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:48:00.622831  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:48:00.643496  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:48:00.662349  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:48:00.681048  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:00.700389  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:00.721204  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:00.741591  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:48:00.755291  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:48:00.769986  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:48:00.784853  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:48:00.798923  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:48:00.812361  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:48:00.826911  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:48:00.842313  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:00.849394  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.857032  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:00.864532  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868398  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868472  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.910592  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:00.918458  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.926263  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:00.934304  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938442  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938531  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.987101  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:00.995288  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.003879  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:01.012703  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016823  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016924  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.059233  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:01.068459  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:01.072670  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:48:01.115135  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:48:01.157870  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:48:01.200156  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:48:01.244244  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:48:01.286456  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:48:01.333479  893814 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:48:01.333592  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:01.333632  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:48:01.333685  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:48:01.347658  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:48:01.347774  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
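The kube-vip manifest above was generated without control-plane load-balancing because the earlier `lsmod | grep ip_vs` check found no ipvs modules (kube-vip.go:163). A sketch of that detection step, reading /proc/modules directly instead of shelling out (purely illustrative; the tool itself runs lsmod over SSH):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsAvailable reports whether any ip_vs kernel module is currently loaded,
// by scanning /proc/modules (the same information "lsmod" prints).
func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// false here corresponds to the log's decision to skip load-balancing.
	fmt.Println("ip_vs loaded:", ok)
}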
	I1120 21:48:01.347874  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:01.355891  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:01.355970  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:48:01.364043  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:01.379594  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:01.393213  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:48:01.408709  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:01.412906  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:01.423617  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.551671  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.569302  893814 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:48:01.569783  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:01.575430  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:01.578446  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.722511  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.736860  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:01.736934  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:01.737186  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960847  893814 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:48:04.960925  893814 node_ready.go:38] duration metric: took 3.223709398s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960953  893814 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:48:04.961033  893814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:48:05.021304  893814 api_server.go:72] duration metric: took 3.451906522s to wait for apiserver process to appear ...
	I1120 21:48:05.021328  893814 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:48:05.021347  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.086025  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:48:05.086102  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:48:05.521475  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.533319  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:05.533405  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.022053  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.033112  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.033164  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.521455  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.532108  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.532149  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.021472  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.033567  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.033607  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.522248  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.530734  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.530766  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.021549  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.030067  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.030107  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.521458  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.536690  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.536723  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.022442  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.030694  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.030720  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.522023  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.532358  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.532394  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.022104  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.033572  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.033669  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.521893  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.530183  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.530209  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.022029  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.030471  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.030511  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.522184  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.530808  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.530915  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:12.021498  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:12.034571  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:48:12.037300  893814 api_server.go:141] control plane version: v1.34.1
	I1120 21:48:12.037383  893814 api_server.go:131] duration metric: took 7.016046235s to wait for apiserver health ...
	I1120 21:48:12.037406  893814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:48:12.048906  893814 system_pods.go:59] 26 kube-system pods found
	I1120 21:48:12.049004  893814 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049030  893814 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049050  893814 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.049082  893814 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.049108  893814 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.049158  893814 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.049176  893814 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.049198  893814 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.049233  893814 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.049257  893814 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.049279  893814 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.049316  893814 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.049340  893814 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049361  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.049397  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049417  893814 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.049437  893814 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.049467  893814 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.049494  893814 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.049514  893814 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.049534  893814 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.049569  893814 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.049590  893814 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.049611  893814 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.049637  893814 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.049658  893814 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.049682  893814 system_pods.go:74] duration metric: took 12.253231ms to wait for pod list to return data ...
	I1120 21:48:12.049715  893814 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:48:12.054143  893814 default_sa.go:45] found service account: "default"
	I1120 21:48:12.054233  893814 default_sa.go:55] duration metric: took 4.491625ms for default service account to be created ...
	I1120 21:48:12.054260  893814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:48:12.060879  893814 system_pods.go:86] 26 kube-system pods found
	I1120 21:48:12.060981  893814 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061047  893814 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061081  893814 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.061118  893814 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.061152  893814 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.061181  893814 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.061223  893814 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.061271  893814 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.061294  893814 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.061323  893814 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.061400  893814 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.061442  893814 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.061465  893814 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061496  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.061529  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061551  893814 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.061574  893814 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.061605  893814 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.061634  893814 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.061656  893814 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.061691  893814 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.061711  893814 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.061741  893814 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.061774  893814 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.061808  893814 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.061865  893814 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.061888  893814 system_pods.go:126] duration metric: took 7.607421ms to wait for k8s-apps to be running ...
	I1120 21:48:12.061910  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:12.062033  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:12.076739  893814 system_svc.go:56] duration metric: took 14.81844ms WaitForService to wait for kubelet
	I1120 21:48:12.076837  893814 kubeadm.go:587] duration metric: took 10.507445578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:12.076873  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:12.086832  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086926  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.086951  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086971  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087052  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.087072  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087105  893814 node_conditions.go:105] duration metric: took 10.20235ms to run NodePressure ...
	I1120 21:48:12.087136  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:12.087208  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:12.090921  893814 out.go:203] 
	I1120 21:48:12.094218  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:12.094393  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.097669  893814 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:48:12.101322  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:48:12.106565  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:48:12.109717  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:48:12.109827  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:48:12.109799  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:48:12.110177  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:48:12.110212  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:48:12.110403  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.132566  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:48:12.132590  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:48:12.132610  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:48:12.132636  893814 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:48:12.132693  893814 start.go:364] duration metric: took 36.636µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:48:12.132719  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:48:12.132728  893814 fix.go:54] fixHost starting: m04
	I1120 21:48:12.132989  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.154532  893814 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:48:12.154570  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:48:12.157790  893814 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:48:12.157940  893814 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:48:12.427421  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.449849  893814 kic.go:430] container "ha-409851-m04" state is running.
	I1120 21:48:12.450339  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:12.476563  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.476804  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:48:12.476866  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:12.503516  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:12.503831  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:12.503851  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:48:12.506827  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:48:15.671577  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.671648  893814 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:48:15.671727  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.694098  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.694405  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.694422  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:48:15.858000  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.858085  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.876926  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.877279  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.877303  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:48:16.029401  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:48:16.029428  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:48:16.029445  893814 ubuntu.go:190] setting up certificates
	I1120 21:48:16.029456  893814 provision.go:84] configureAuth start
	I1120 21:48:16.029533  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:16.048090  893814 provision.go:143] copyHostCerts
	I1120 21:48:16.048141  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048175  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:48:16.048187  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048261  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:48:16.048383  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048401  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:48:16.048406  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048432  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:48:16.048499  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048515  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:48:16.048520  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048545  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:48:16.048600  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:48:16.265083  893814 provision.go:177] copyRemoteCerts
	I1120 21:48:16.265160  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:48:16.265209  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.290442  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.396414  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:48:16.396484  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:48:16.418369  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:48:16.418439  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:48:16.437910  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:48:16.437992  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:48:16.456712  893814 provision.go:87] duration metric: took 427.242108ms to configureAuth
	I1120 21:48:16.456739  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:48:16.457027  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:16.457179  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.476563  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:16.477370  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:16.477578  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:48:16.833311  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:48:16.833334  893814 machine.go:97] duration metric: took 4.356521136s to provisionDockerMachine
	I1120 21:48:16.833346  893814 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:48:16.833356  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:48:16.833422  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:48:16.833480  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.855465  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.967534  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:48:16.970900  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:48:16.970931  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:48:16.970942  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:48:16.971037  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:48:16.971121  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:48:16.971132  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:48:16.971248  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:48:16.980647  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:17.001479  893814 start.go:296] duration metric: took 168.114968ms for postStartSetup
	I1120 21:48:17.001571  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:48:17.001627  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.030384  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.140073  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:48:17.144863  893814 fix.go:56] duration metric: took 5.012127885s for fixHost
	I1120 21:48:17.144890  893814 start.go:83] releasing machines lock for "ha-409851-m04", held for 5.012183123s
	I1120 21:48:17.144964  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:17.172547  893814 out.go:179] * Found network options:
	I1120 21:48:17.175556  893814 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:48:17.178404  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178431  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178457  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178669  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:48:17.178737  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:48:17.178785  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.178630  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:48:17.178897  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.197245  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.203292  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.340122  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:48:17.405989  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:48:17.406071  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:48:17.414439  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:48:17.414465  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:48:17.414498  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:48:17.414553  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:48:17.430500  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:48:17.443843  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:48:17.443906  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:48:17.460231  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:48:17.475600  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:48:17.602698  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:48:17.729597  893814 docker.go:234] disabling docker service ...
	I1120 21:48:17.729663  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:48:17.746588  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:48:17.760617  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:48:17.897973  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:48:18.030520  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:48:18.046315  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:48:18.066053  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:48:18.066129  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.077050  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:48:18.077175  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.090079  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.100829  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.110671  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:48:18.121922  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.135640  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.145103  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.155094  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:48:18.164129  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:48:18.171842  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:18.297944  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:48:18.470275  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:48:18.470358  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:48:18.479108  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:48:18.479175  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:48:18.483098  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:48:18.507764  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:48:18.507924  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.539112  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.574786  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:18.577738  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:18.580677  893814 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:48:18.583863  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:18.602824  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:18.606736  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:18.616366  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:18.616605  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:18.616854  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:18.635714  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:18.635989  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:48:18.636005  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:18.636021  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:18.636154  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:18.636201  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:18.636216  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:18.636245  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:18.636262  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:18.636274  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:18.636332  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:18.636367  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:18.636380  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:18.636406  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:18.636432  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:18.636458  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:18.636503  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:18.636535  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.636553  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.636564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.636585  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:18.657556  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:18.675080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:18.694571  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:18.716226  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:18.739895  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:18.768046  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:18.787993  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:18.794810  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.802541  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:18.810498  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814300  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814368  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.856630  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:18.864919  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.872737  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:18.880590  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884848  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884916  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.931413  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:18.939099  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.946583  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:18.954298  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960087  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960197  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:19.002435  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:19.012167  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:19.016432  893814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:48:19.016483  893814 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:48:19.016573  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:19.016654  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:19.026160  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:19.026286  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:48:19.036127  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:19.049708  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:19.064947  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:19.068918  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:19.079069  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.199728  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.213792  893814 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:48:19.214167  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:19.219019  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:19.221920  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.355490  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.371278  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:19.371349  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:19.371586  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374629  893814 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:48:19.374657  893814 node_ready.go:38] duration metric: took 3.053659ms for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374671  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:19.374745  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:19.389451  893814 system_svc.go:56] duration metric: took 14.77112ms WaitForService to wait for kubelet
	I1120 21:48:19.389479  893814 kubeadm.go:587] duration metric: took 175.627603ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:19.389497  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:19.393426  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393518  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393535  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393542  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393547  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393552  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393557  893814 node_conditions.go:105] duration metric: took 4.054434ms to run NodePressure ...
	I1120 21:48:19.393575  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:19.393603  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:19.393953  893814 ssh_runner.go:195] Run: rm -f paused
	I1120 21:48:19.397987  893814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:48:19.398502  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:48:19.416487  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:48:21.424537  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:23.929996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:26.423923  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:28.424118  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:30.923501  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:33.423121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:35.423365  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:37.424719  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:39.923727  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:41.965360  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:44.435238  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:46.923403  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:48.923993  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:51.426397  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:53.924562  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:56.423976  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:58.431436  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:00.922387  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:02.923880  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:04.924121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:07.423527  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:09.424675  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:11.922381  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:13.922686  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:15.923609  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:17.924006  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:20.423097  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:22.423996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	I1120 21:49:23.424030  893814 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:49:23.424063  893814 pod_ready.go:86] duration metric: took 1m4.007542805s for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.424073  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.430119  893814 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:49:23.430146  893814 pod_ready.go:86] duration metric: took 6.066348ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.434497  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442021  893814 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:49:23.442059  893814 pod_ready.go:86] duration metric: took 7.532597ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442070  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.453471  893814 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:49:23.453510  893814 pod_ready.go:86] duration metric: took 11.432528ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.460522  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.617970  893814 request.go:683] "Waited before sending request" delay="157.293328ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851"
	I1120 21:49:23.817544  893814 request.go:683] "Waited before sending request" delay="194.243021ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:23.820786  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:49:23.820814  893814 pod_ready.go:86] duration metric: took 360.266065ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.820823  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.018232  893814 request.go:683] "Waited before sending request" delay="197.334029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851-m02"
	I1120 21:49:24.217808  893814 request.go:683] "Waited before sending request" delay="195.31208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:24.220981  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:49:24.221009  893814 pod_ready.go:86] duration metric: took 400.178739ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.418386  893814 request.go:683] "Waited before sending request" delay="197.22929ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:49:24.423065  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.617542  893814 request.go:683] "Waited before sending request" delay="194.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:49:24.818451  893814 request.go:683] "Waited before sending request" delay="195.369435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:24.821748  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:49:24.821777  893814 pod_ready.go:86] duration metric: took 398.632324ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.821787  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.018152  893814 request.go:683] "Waited before sending request" delay="196.257511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:49:25.217440  893814 request.go:683] "Waited before sending request" delay="193.274434ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:25.221099  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:49:25.221184  893814 pod_ready.go:86] duration metric: took 399.388707ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.417592  893814 request.go:683] "Waited before sending request" delay="196.294697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 21:49:25.421901  893814 pod_ready.go:83] waiting for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.618261  893814 request.go:683] "Waited before sending request" delay="196.198417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qqxh"
	I1120 21:49:25.818227  893814 request.go:683] "Waited before sending request" delay="195.266861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:25.822845  893814 pod_ready.go:94] pod "kube-proxy-4qqxh" is "Ready"
	I1120 21:49:25.822876  893814 pod_ready.go:86] duration metric: took 400.891774ms for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.822887  893814 pod_ready.go:83] waiting for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.018147  893814 request.go:683] "Waited before sending request" delay="195.181839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz7vt"
	I1120 21:49:26.218218  893814 request.go:683] "Waited before sending request" delay="194.325204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:26.221718  893814 pod_ready.go:94] pod "kube-proxy-pz7vt" is "Ready"
	I1120 21:49:26.221756  893814 pod_ready.go:86] duration metric: took 398.861103ms for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.221767  893814 pod_ready.go:83] waiting for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.418209  893814 request.go:683] "Waited before sending request" delay="196.333755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhl6"
	I1120 21:49:26.618151  893814 request.go:683] "Waited before sending request" delay="196.349344ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m04"
	I1120 21:49:26.623181  893814 pod_ready.go:94] pod "kube-proxy-xnhl6" is "Ready"
	I1120 21:49:26.623210  893814 pod_ready.go:86] duration metric: took 401.436889ms for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.817459  893814 request.go:683] "Waited before sending request" delay="194.131676ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1120 21:49:26.821013  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.018492  893814 request.go:683] "Waited before sending request" delay="197.322386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851"
	I1120 21:49:27.217513  893814 request.go:683] "Waited before sending request" delay="190.181719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:27.226443  893814 pod_ready.go:94] pod "kube-scheduler-ha-409851" is "Ready"
	I1120 21:49:27.226520  893814 pod_ready.go:86] duration metric: took 405.47524ms for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.226546  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.417983  893814 request.go:683] "Waited before sending request" delay="191.325659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:27.618140  893814 request.go:683] "Waited before sending request" delay="196.249535ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:27.817620  893814 request.go:683] "Waited before sending request" delay="90.393989ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:28.018196  893814 request.go:683] "Waited before sending request" delay="197.189707ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.417767  893814 request.go:683] "Waited before sending request" delay="186.33455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.817959  893814 request.go:683] "Waited before sending request" delay="87.275796ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:49:29.233343  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:31.233779  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:33.234413  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:35.733284  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:38.233049  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:40.233361  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:42.235442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:44.734815  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:47.232729  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:49.233113  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:51.234068  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:53.732962  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:56.233319  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:58.734472  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:01.234009  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:03.234832  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:05.733469  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:08.234179  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:10.735546  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:12.735872  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:14.736374  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:16.740445  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:19.233806  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:21.733741  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:23.735456  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:26.232453  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:28.233317  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:30.735024  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:32.735868  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:35.234232  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:37.734207  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:40.234052  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:42.240134  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:44.733059  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:46.733334  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:48.738389  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:51.233067  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:53.234660  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:55.733852  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:57.734484  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:00.249903  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:02.732606  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:04.736105  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:07.233350  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:09.733211  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:11.733392  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:14.234536  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:16.732259  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:18.735892  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:20.735996  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:23.234680  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:25.733375  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:27.733961  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:29.735523  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:32.236382  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:34.733336  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:36.733744  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:38.734442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:40.734588  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:42.734796  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:44.735137  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:46.736111  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:49.233632  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:51.733070  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:53.734822  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:56.233800  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:58.234379  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:00.264529  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:02.742360  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:05.233819  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:07.733077  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:09.734867  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:12.233625  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:14.733387  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:16.734342  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:18.734797  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	I1120 21:52:19.398473  893814 pod_ready.go:86] duration metric: took 2m52.171896252s for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:52:19.398508  893814 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:52:19.398524  893814 pod_ready.go:40] duration metric: took 4m0.000499103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:52:19.401528  893814 out.go:203] 
	W1120 21:52:19.404511  893814 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:52:19.407414  893814 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
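The failed wait above was for every kube-system pod carrying one of the listed labels (including component=kube-scheduler) to become "Ready" within the 4m0s budget. As a rough manual follow-up (a sketch only, assuming the ha-409851 profile is still up and is queried through minikube's bundled kubectl):

	# list the scheduler pods the wait loop was polling, with node placement
	out/minikube-linux-arm64 -p ha-409851 kubectl -- -n kube-system get pods -l component=kube-scheduler -o wide
	# dump conditions and recent events for the pod that never reported Ready
	out/minikube-linux-arm64 -p ha-409851 kubectl -- -n kube-system describe pod kube-scheduler-ha-409851-m02

The "Ready" condition shown by describe is the condition the pod_ready.go loop above was reporting on.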
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-409851
helpers_test.go:243: (dbg) docker inspect ha-409851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	        "Created": "2025-11-20T21:32:05.722530265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893938,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:46:13.072458678Z",
	            "FinishedAt": "2025-11-20T21:46:12.348513553Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hosts",
	        "LogPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853-json.log",
	        "Name": "/ha-409851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-409851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-409851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	                "LowerDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-409851",
	                "Source": "/var/lib/docker/volumes/ha-409851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-409851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-409851",
	                "name.minikube.sigs.k8s.io": "ha-409851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc18c8f3af5088b5bb1d9ce24d0b962e6479dd84027377689edccf3f48baefb2",
	            "SandboxKey": "/var/run/docker/netns/cc18c8f3af50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-409851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:23:29:98:04:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad232b357b1bc65babf7a48f3581b00686ef0ccc0f86acee1a57f8a071f682f1",
	                    "EndpointID": "42281e0852c3f6fd3ef3ee7cb17a8b94df54edc9c35c3a29e94bd1eb0ceadb4a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-409851",
	                        "d20916d298c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
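The NetworkSettings.Ports block above records the localhost ports Docker published for the ha-409851 container. They can be read back with the same Go-template pattern the harness itself uses for 22/tcp later in the logs; a sketch, assuming the container still exists:

	# published SSH port (33937 in the inspect output above)
	docker inspect ha-409851 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# published API-server port (33940 above), usable for probing 127.0.0.1:<port> by hand
	docker inspect ha-409851 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'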
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 logs -n 25: (1.476031379s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt                                                            │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt                                                │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node start m02 --alsologtostderr -v 5                                                                                     │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │                     │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:38 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5                                                                                  │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:38 UTC │                     │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │                     │
	│ node    │ ha-409851 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:45 UTC │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:46 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:46:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:46:12.791438  893814 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:46:12.791547  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791556  893814 out.go:374] Setting ErrFile to fd 2...
	I1120 21:46:12.791561  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791812  893814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:46:12.792153  893814 out.go:368] Setting JSON to false
	I1120 21:46:12.792975  893814 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16098,"bootTime":1763659075,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:46:12.793039  893814 start.go:143] virtualization:  
	I1120 21:46:12.796567  893814 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:46:12.800274  893814 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:46:12.800333  893814 notify.go:221] Checking for updates...
	I1120 21:46:12.805930  893814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:46:12.808740  893814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:12.811665  893814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:46:12.814590  893814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:46:12.817489  893814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:46:12.820869  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:12.821456  893814 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:46:12.854504  893814 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:46:12.854629  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.916245  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.907017867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.916354  893814 docker.go:319] overlay module found
	I1120 21:46:12.921281  893814 out.go:179] * Using the docker driver based on existing profile
	I1120 21:46:12.924086  893814 start.go:309] selected driver: docker
	I1120 21:46:12.924103  893814 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.924235  893814 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:46:12.924335  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.982109  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.972838498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.982542  893814 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:46:12.982605  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:12.982654  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:12.982705  893814 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.987881  893814 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:46:12.990803  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:12.993745  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:12.996606  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:12.996692  893814 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:46:12.996690  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:12.996704  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:12.996891  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:12.996899  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:12.997043  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.017636  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:13.017661  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:13.017680  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:13.017708  893814 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:13.017784  893814 start.go:364] duration metric: took 50.068µs to acquireMachinesLock for "ha-409851"
	I1120 21:46:13.017814  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:13.017825  893814 fix.go:54] fixHost starting: 
	I1120 21:46:13.018084  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.035594  893814 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:46:13.035627  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:13.038907  893814 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:46:13.039022  893814 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:46:13.304460  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.328120  893814 kic.go:430] container "ha-409851" state is running.
	I1120 21:46:13.328719  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:13.354344  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.354582  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:13.354651  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:13.379550  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:13.379870  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:13.379890  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:13.380728  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:46:16.522806  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.522896  893814 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:46:16.523007  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.540197  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.540514  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.540535  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:46:16.694351  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.694434  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.711779  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.712102  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.712124  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:16.851168  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:16.851196  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:16.851221  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:16.851230  893814 provision.go:84] configureAuth start
	I1120 21:46:16.851299  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:16.868945  893814 provision.go:143] copyHostCerts
	I1120 21:46:16.868995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869035  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:16.869055  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869140  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:16.869236  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869258  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:16.869266  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869304  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:16.869353  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869373  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:16.869384  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869416  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:16.869469  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:46:16.952356  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:16.952425  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:16.952478  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.973308  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.074564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:17.074634  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:46:17.091858  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:17.091917  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:17.109606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:17.109674  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:17.127878  893814 provision.go:87] duration metric: took 276.622438ms to configureAuth
	I1120 21:46:17.127903  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:17.128138  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:17.128246  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.145230  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:17.145555  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:17.145568  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:17.521503  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:17.521523  893814 machine.go:97] duration metric: took 4.166931199s to provisionDockerMachine
	I1120 21:46:17.521535  893814 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:46:17.521545  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:17.521607  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:17.521648  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.543040  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.642924  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:17.646266  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:17.646295  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:17.646306  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:17.646362  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:17.646441  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:17.646453  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:17.646557  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:17.654029  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:17.671759  893814 start.go:296] duration metric: took 150.208491ms for postStartSetup
	I1120 21:46:17.671861  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:17.671903  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.688970  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.788149  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:17.792950  893814 fix.go:56] duration metric: took 4.775117155s for fixHost
	I1120 21:46:17.792985  893814 start.go:83] releasing machines lock for "ha-409851", held for 4.775188491s
	I1120 21:46:17.793094  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:17.811172  893814 ssh_runner.go:195] Run: cat /version.json
	I1120 21:46:17.811227  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.811496  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:17.811569  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.830577  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.847514  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:18.032855  893814 ssh_runner.go:195] Run: systemctl --version
	I1120 21:46:18.039676  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:18.084631  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:18.089315  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:18.089397  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:18.097880  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:18.097906  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:18.097957  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:18.098046  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:18.113581  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:18.127110  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:18.127198  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:18.143327  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:18.156859  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:18.285846  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:18.406177  893814 docker.go:234] disabling docker service ...
	I1120 21:46:18.406303  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:18.422621  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:18.436488  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:18.557150  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:18.669376  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:18.683020  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:18.696701  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:18.696805  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.705450  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:18.705544  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.714727  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.724078  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.733001  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:18.741246  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.750057  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.758559  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.767154  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:18.774675  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:18.782542  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:18.908183  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
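Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) are equivalent to a CRI-O drop-in along these lines. This is a sketch only: the real changes are applied in-place to /etc/crio/crio.conf.d/02-crio.conf, and the 99-sketch.conf name below is hypothetical.

    sudo tee /etc/crio/crio.conf.d/99-sketch.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio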
	I1120 21:46:19.102647  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:46:19.102768  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:46:19.107633  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:46:19.107713  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:46:19.112020  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:46:19.139825  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:46:19.139929  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.171276  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.211415  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:46:19.214291  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:46:19.231738  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:46:19.235755  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.246147  893814 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:46:19.246304  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:19.246367  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.290538  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.290565  893814 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:46:19.290626  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.316155  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.316180  893814 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:46:19.316189  893814 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:46:19.316303  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:46:19.316387  893814 ssh_runner.go:195] Run: crio config
	I1120 21:46:19.371279  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:19.371300  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:19.371316  893814 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:46:19.371339  893814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:46:19.371462  893814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:46:19.371484  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:46:19.371537  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:46:19.384116  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:19.384238  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
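Control-plane load-balancing was skipped above because lsmod reported no ip_vs modules, so this kube-vip manifest relies on ARP announcement and leader election only. A minimal sketch of how one could check for, and load, the module on a node whose kernel ships it:

    lsmod | grep ip_vs || sudo modprobe ip_vs
    lsmod | grep ip_vs    # non-empty output means IPVS-based load-balancing would be possible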
	I1120 21:46:19.384326  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:46:19.392356  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:46:19.392430  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:46:19.400069  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:46:19.413705  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:46:19.427554  893814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
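Once kubeadm.yaml.new is on the node it can be sanity-checked before the restart proceeds. A sketch, assuming the kubeadm binary sits alongside the kubelet in /var/lib/minikube/binaries/v1.34.1 and that this kubeadm release supports the `config validate` subcommand (present in recent versions):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new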
	I1120 21:46:19.440926  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:46:19.454200  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:46:19.457772  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.467840  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:19.582412  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:46:19.599710  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:46:19.599791  893814 certs.go:195] generating shared ca certs ...
	I1120 21:46:19.599822  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.599996  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:46:19.600074  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:46:19.600106  893814 certs.go:257] generating profile certs ...
	I1120 21:46:19.600223  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:46:19.600276  893814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee
	I1120 21:46:19.600310  893814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1120 21:46:19.750831  893814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee ...
	I1120 21:46:19.750905  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee: {Name:mk539a3dda8a36b48c6c5c30b7491f9043b065a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751146  893814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee ...
	I1120 21:46:19.751277  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee: {Name:mk851c2f98f193e8bb483e43db8a657c69eae8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751416  893814 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:46:19.751615  893814 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:46:19.751796  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:46:19.751838  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:46:19.751886  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:46:19.751918  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:46:19.751961  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:46:19.751995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:46:19.752027  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:46:19.752070  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:46:19.752104  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:46:19.752174  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:46:19.752242  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:46:19.752268  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:46:19.752317  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:46:19.752367  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:46:19.752427  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:46:19.752538  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:19.752606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:46:19.752639  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:46:19.752686  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:19.753263  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:46:19.782536  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:46:19.807080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:46:19.842006  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:46:19.863690  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:46:19.882351  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:46:19.902131  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:46:19.923247  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:46:19.943308  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:46:19.961281  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:46:19.981823  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:46:19.999815  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:46:20.019398  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:46:20.026511  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.035530  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:46:20.043827  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048146  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048252  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.090685  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:46:20.099210  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.107103  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:46:20.115263  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119310  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119405  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.160958  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:46:20.168922  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.176806  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:46:20.184554  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188641  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188742  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.232577  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
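The openssl -hash calls above compute the subject hash from which the /etc/ssl/certs/<hash>.0 symlink names are derived (b5213941.0 for minikubeCA in this run). The equivalent manual steps, as a sketch using the same paths:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    test -L "/etc/ssl/certs/${HASH}.0" && echo "CA trusted via ${HASH}.0"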
	I1120 21:46:20.246815  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:46:20.252000  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:46:20.307993  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:46:20.361067  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:46:20.404267  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:46:20.471141  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:46:20.556774  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
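The -checkend 86400 flag makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours); that exit code is what decides whether the existing certificates can be reused on this restart path. Standalone form of the same check:

    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24 hours"
    else
        echo "certificate expires within 24 hours (or could not be read)"
    fi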
	I1120 21:46:20.620581  893814 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:20.620772  893814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:46:20.620872  893814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:46:20.672595  893814 cri.go:89] found id: "e758e4601a79aacd9dd015c82692281d156d9100d6bc2fb480b11d07ff223294"
	I1120 21:46:20.672675  893814 cri.go:89] found id: "bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede"
	I1120 21:46:20.672702  893814 cri.go:89] found id: "29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de"
	I1120 21:46:20.672723  893814 cri.go:89] found id: "d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd"
	I1120 21:46:20.672755  893814 cri.go:89] found id: "538778f2e99f0831684f744a21c231b476e72c223d7af53829698631c58b4b38"
	I1120 21:46:20.672779  893814 cri.go:89] found id: ""
	I1120 21:46:20.672864  893814 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:46:20.692788  893814 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:46:20Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:46:20.692935  893814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:46:20.704191  893814 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:46:20.704251  893814 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:46:20.704341  893814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:46:20.715485  893814 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:20.716011  893814 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.716179  893814 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:46:20.716543  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.717160  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:46:20.717985  893814 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:46:20.718059  893814 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:46:20.718131  893814 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:46:20.718157  893814 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:46:20.718177  893814 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:46:20.718212  893814 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:46:20.730102  893814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:46:20.744141  893814 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:46:20.744165  893814 kubeadm.go:602] duration metric: took 39.885836ms to restartPrimaryControlPlane
	I1120 21:46:20.744174  893814 kubeadm.go:403] duration metric: took 123.603025ms to StartCluster
	I1120 21:46:20.744191  893814 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.744256  893814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.744888  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.745066  893814 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:46:20.745084  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:46:20.745100  893814 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:46:20.745725  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.751118  893814 out.go:179] * Enabled addons: 
	I1120 21:46:20.754039  893814 addons.go:515] duration metric: took 8.930638ms for enable addons: enabled=[]
	I1120 21:46:20.754080  893814 start.go:247] waiting for cluster config update ...
	I1120 21:46:20.754090  893814 start.go:256] writing updated cluster config ...
	I1120 21:46:20.757337  893814 out.go:203] 
	I1120 21:46:20.760537  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.760717  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.764214  893814 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:46:20.767355  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:20.770446  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:20.773470  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:20.773563  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:20.773537  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:20.773902  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:20.773939  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:20.774117  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.801641  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:20.801660  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:20.801671  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:20.801698  893814 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:20.801748  893814 start.go:364] duration metric: took 35.446µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:46:20.801767  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:20.801774  893814 fix.go:54] fixHost starting: m02
	I1120 21:46:20.802025  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:20.830914  893814 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:46:20.830963  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:20.835462  893814 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:46:20.835556  893814 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:46:21.218686  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:21.252602  893814 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:46:21.252990  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:21.287738  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:21.288165  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:21.288242  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:21.321625  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:21.321986  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:21.322003  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:21.324132  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50986->127.0.0.1:33942: read: connection reset by peer
	I1120 21:46:24.541429  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.541464  893814 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:46:24.541536  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.591123  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.591436  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.591454  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:46:24.829670  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.830508  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.868680  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.868993  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.869016  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:25.086415  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:25.086446  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:25.086467  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:25.086477  893814 provision.go:84] configureAuth start
	I1120 21:46:25.086545  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:25.116440  893814 provision.go:143] copyHostCerts
	I1120 21:46:25.116492  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116528  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:25.116540  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116614  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:25.116704  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116727  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:25.116737  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116766  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:25.116814  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116842  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:25.116852  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116880  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:25.116934  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:46:25.299085  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:25.299152  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:25.299205  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.334304  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:25.454142  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:25.454207  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:25.519452  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:25.519523  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:46:25.579807  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:25.579872  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:25.625625  893814 provision.go:87] duration metric: took 539.133654ms to configureAuth
	I1120 21:46:25.625654  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:25.625881  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:25.626005  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.676739  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:25.677055  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:25.677078  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:27.313592  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:27.313611  893814 machine.go:97] duration metric: took 6.025425517s to provisionDockerMachine
	I1120 21:46:27.313622  893814 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:46:27.313633  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:27.313709  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:27.313760  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.348890  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.472301  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:27.476588  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:27.476614  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:27.476626  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:27.476683  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:27.476757  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:27.476765  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:27.476876  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:27.485018  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:27.504498  893814 start.go:296] duration metric: took 190.860481ms for postStartSetup
	I1120 21:46:27.504660  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:27.504741  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.528788  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.644723  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:27.649843  893814 fix.go:56] duration metric: took 6.84806345s for fixHost
	I1120 21:46:27.649868  893814 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.848112263s
	I1120 21:46:27.649945  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:27.674188  893814 out.go:179] * Found network options:
	I1120 21:46:27.677242  893814 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:46:27.680124  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:46:27.680168  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:46:27.680244  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:27.680247  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:27.680288  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.680307  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.700610  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.707137  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.925105  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:28.059572  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:28.059657  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:28.074369  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:28.074399  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:28.074432  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:28.074499  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:28.097384  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:28.115088  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:28.115159  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:28.145681  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:28.169842  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:28.395806  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:28.633186  893814 docker.go:234] disabling docker service ...
	I1120 21:46:28.633295  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:28.653639  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:28.673051  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:28.911134  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:29.139790  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:29.165309  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:29.189385  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:29.189499  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.203577  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:29.203723  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.219781  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.229964  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.247451  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:29.257774  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.270135  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.279629  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.289968  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:29.299527  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:29.308385  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:29.625535  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:47:59.900415  893814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.274799929s)
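	The sequence above points crictl at the CRI-O socket and patches /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10.1, cgroupfs as cgroup_manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0) before restarting the runtime; the restart alone took about 1m30s here. A minimal sketch of how to confirm those values on the node, assuming the profile and node names from this run and current minikube ssh flags:
	  # hypothetical manual spot-check; values taken from the sed commands in the log above
	  minikube -p ha-409851 ssh -n ha-409851-m02 -- \
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf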
	I1120 21:47:59.900439  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:47:59.900493  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:47:59.904340  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:47:59.904408  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:47:59.908141  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:47:59.934786  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:47:59.934878  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:47:59.970641  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:00.031101  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:00.052822  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:00.070551  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:00.144325  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:00.158851  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:00.193319  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:00.193638  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:00.193952  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:00.257208  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:00.257542  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:48:00.257559  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:00.257575  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:00.257700  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:00.257744  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:00.257751  893814 certs.go:257] generating profile certs ...
	I1120 21:48:00.257839  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:48:00.257904  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.e3c52656
	I1120 21:48:00.257941  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:48:00.257951  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:00.257964  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:00.257975  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:00.257985  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:00.257997  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:48:00.258009  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:48:00.258021  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:48:00.258032  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:48:00.258087  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:00.258118  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:00.258141  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:00.258171  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:00.258206  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:00.258229  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:00.258276  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:00.258311  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.258325  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.258342  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:00.258416  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:48:00.286658  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:48:00.411419  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:48:00.416825  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:48:00.429106  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:48:00.434141  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:48:00.446859  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:48:00.451932  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:48:00.463743  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:48:00.468370  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:48:00.478967  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:48:00.483728  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:48:00.495516  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:48:00.499782  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:48:00.510022  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:00.533411  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:00.557609  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:00.579641  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:00.599346  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:48:00.622831  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:48:00.643496  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:48:00.662349  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:48:00.681048  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:00.700389  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:00.721204  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:00.741591  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:48:00.755291  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:48:00.769986  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:48:00.784853  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:48:00.798923  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:48:00.812361  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:48:00.826911  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:48:00.842313  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:00.849394  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.857032  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:00.864532  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868398  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868472  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.910592  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:00.918458  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.926263  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:00.934304  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938442  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938531  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.987101  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:00.995288  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.003879  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:01.012703  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016823  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016924  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.059233  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
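	Each CA bundle is linked into /usr/share/ca-certificates and then verified through its OpenSSL subject-hash symlink under /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 above); the hash is exactly what openssl x509 -hash -noout prints. A minimal sketch of the same check done by hand on the node, assuming the paths from this run:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 per the log
	  ls -l /etc/ssl/certs/b5213941.0                                           # should resolve back to minikubeCA.pem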
	I1120 21:48:01.068459  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:01.072670  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:48:01.115135  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:48:01.157870  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:48:01.200156  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:48:01.244244  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:48:01.286456  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
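	The six -checkend 86400 probes above only assert that none of the control-plane certificates expires within the next 24 hours. A hedged equivalent that also prints the actual expiry date, assuming the certificate paths from this run:
	  openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/peer.crt && echo "still valid for 24h"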
	I1120 21:48:01.333479  893814 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:48:01.333592  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
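	The kubelet drop-in rendered above pins --hostname-override=ha-409851-m02 and --node-ip=192.168.49.3 and disables per-QoS cgroups; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (363 bytes). One way to inspect what systemd actually loaded, assuming the node name from this run and current minikube ssh flags:
	  minikube -p ha-409851 ssh -n ha-409851-m02 -- sudo systemctl cat kubelet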
	I1120 21:48:01.333632  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:48:01.333685  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:48:01.347658  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:48:01.347774  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
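	Because sudo sh -c "lsmod | grep ip_vs" exited non-zero above, kube-vip is generated without IPVS control-plane load-balancing and only manages the 192.168.49.254 VIP through ARP-based leader election. The manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml further down (1358 bytes); a hedged spot-check, assuming the profile and node names from this run:
	  minikube -p ha-409851 ssh -n ha-409851-m02 -- sudo grep -A1 'name: address' /etc/kubernetes/manifests/kube-vip.yaml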
	I1120 21:48:01.347874  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:01.355891  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:01.355970  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:48:01.364043  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:01.379594  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:01.393213  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:48:01.408709  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:01.412906  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:01.423617  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.551671  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.569302  893814 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:48:01.569783  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:01.575430  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:01.578446  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.722511  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.736860  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:01.736934  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:01.737186  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960847  893814 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:48:04.960925  893814 node_ready.go:38] duration metric: took 3.223709398s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960953  893814 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:48:04.961033  893814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:48:05.021304  893814 api_server.go:72] duration metric: took 3.451906522s to wait for apiserver process to appear ...
	I1120 21:48:05.021328  893814 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:48:05.021347  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.086025  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:48:05.086102  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:48:05.521475  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.533319  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:05.533405  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.022053  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.033112  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.033164  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.521455  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.532108  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.532149  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.021472  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.033567  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.033607  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.522248  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.530734  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.530766  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.021549  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.030067  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.030107  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.521458  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.536690  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.536723  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.022442  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.030694  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.030720  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.522023  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.532358  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.532394  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.022104  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.033572  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.033669  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.521893  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.530183  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.530209  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.022029  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.030471  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.030511  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.522184  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.530808  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.530915  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:12.021498  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:12.034571  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:48:12.037300  893814 api_server.go:141] control plane version: v1.34.1
	I1120 21:48:12.037383  893814 api_server.go:131] duration metric: took 7.016046235s to wait for apiserver health ...
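
The block above is api_server.go polling https://192.168.49.2:8443/healthz roughly every 500ms; a 500 response whose body lists unfinished poststarthooks counts as "not ready", and the loop exits once a 200 arrives (about 7.0s here). A minimal Go sketch of that polling pattern, not minikube's actual implementation; the URL is taken from the log and TLS verification is skipped purely for brevity (real code should trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
    	client := &http.Client{
    		// Assumption: TLS verification skipped for the sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// A 500 body lists the poststarthooks that have not finished yet.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
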
	I1120 21:48:12.037406  893814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:48:12.048906  893814 system_pods.go:59] 26 kube-system pods found
	I1120 21:48:12.049004  893814 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049030  893814 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049050  893814 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.049082  893814 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.049108  893814 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.049158  893814 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.049176  893814 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.049198  893814 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.049233  893814 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.049257  893814 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.049279  893814 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.049316  893814 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.049340  893814 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049361  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.049397  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049417  893814 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.049437  893814 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.049467  893814 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.049494  893814 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.049514  893814 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.049534  893814 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.049569  893814 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.049590  893814 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.049611  893814 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.049637  893814 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.049658  893814 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.049682  893814 system_pods.go:74] duration metric: took 12.253231ms to wait for pod list to return data ...
	I1120 21:48:12.049715  893814 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:48:12.054143  893814 default_sa.go:45] found service account: "default"
	I1120 21:48:12.054233  893814 default_sa.go:55] duration metric: took 4.491625ms for default service account to be created ...
	I1120 21:48:12.054260  893814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:48:12.060879  893814 system_pods.go:86] 26 kube-system pods found
	I1120 21:48:12.060981  893814 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061047  893814 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061081  893814 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.061118  893814 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.061152  893814 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.061181  893814 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.061223  893814 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.061271  893814 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.061294  893814 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.061323  893814 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.061400  893814 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.061442  893814 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.061465  893814 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061496  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.061529  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061551  893814 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.061574  893814 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.061605  893814 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.061634  893814 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.061656  893814 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.061691  893814 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.061711  893814 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.061741  893814 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.061774  893814 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.061808  893814 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.061865  893814 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.061888  893814 system_pods.go:126] duration metric: took 7.607421ms to wait for k8s-apps to be running ...
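
system_pods.go is walking the kube-system pod list and reporting containers that are Running but not yet Ready. A rough client-go equivalent, assuming a kubeconfig at a placeholder path (minikube builds its client from the profile's certificates instead):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: kubeconfig path is a placeholder.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
    	}
    }
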
	I1120 21:48:12.061910  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:12.062033  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:12.076739  893814 system_svc.go:56] duration metric: took 14.81844ms WaitForService to wait for kubelet
	I1120 21:48:12.076837  893814 kubeadm.go:587] duration metric: took 10.507445578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:12.076873  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:12.086832  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086926  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.086951  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086971  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087052  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.087072  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087105  893814 node_conditions.go:105] duration metric: took 10.20235ms to run NodePressure ...
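
node_conditions.go reads each node's ephemeral-storage and CPU capacity (203034800Ki and 2 above). A small client-go sketch of the same lookup, again with a placeholder kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList keyed by resource name.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }
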
	I1120 21:48:12.087136  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:12.087208  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:12.090921  893814 out.go:203] 
	I1120 21:48:12.094218  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:12.094393  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.097669  893814 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:48:12.101322  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:48:12.106565  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:48:12.109717  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:48:12.109827  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:48:12.109799  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:48:12.110177  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:48:12.110212  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:48:12.110403  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.132566  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:48:12.132590  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:48:12.132610  893814 cache.go:243] Successfully downloaded all kic artifacts
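
image.go first asks the local docker daemon whether the kicbase image is already present so the pull can be skipped ("Found ... in local docker daemon, skipping pull"). A sketch of that check shelling out to the docker CLI (an assumption; minikube talks to the daemon through a client library), with the tag taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has ref.
    func imageInDaemon(ref string) bool {
    	// `docker image inspect` exits non-zero when the image is absent.
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924"
    	if imageInDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not found, would pull")
    	}
    }
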
	I1120 21:48:12.132636  893814 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:48:12.132693  893814 start.go:364] duration metric: took 36.636µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:48:12.132719  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:48:12.132728  893814 fix.go:54] fixHost starting: m04
	I1120 21:48:12.132989  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.154532  893814 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:48:12.154570  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:48:12.157790  893814 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:48:12.157940  893814 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:48:12.427421  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.449849  893814 kic.go:430] container "ha-409851-m04" state is running.
	I1120 21:48:12.450339  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:12.476563  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.476804  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:48:12.476866  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:12.503516  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:12.503831  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:12.503851  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:48:12.506827  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:48:15.671577  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.671648  893814 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:48:15.671727  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.694098  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.694405  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.694422  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:48:15.858000  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.858085  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.876926  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.877279  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.877303  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:48:16.029401  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
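
provisionDockerMachine drives the node over SSH on the container's published port (127.0.0.1:33947, user docker) to set the hostname and /etc/hostname. A sketch of one such remote command using golang.org/x/crypto/ssh; the key path is a placeholder and host-key checking is skipped, as is typical for throwaway test nodes:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH runs one command on the node over SSH and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Assumption: host key checking skipped for a local test node.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Address, user and command mirror the log; the key path is a placeholder.
    	out, err := runSSH("127.0.0.1:33947", "docker", "/path/to/machines/ha-409851-m04/id_rsa",
    		`sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
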
	I1120 21:48:16.029428  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:48:16.029445  893814 ubuntu.go:190] setting up certificates
	I1120 21:48:16.029456  893814 provision.go:84] configureAuth start
	I1120 21:48:16.029533  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:16.048090  893814 provision.go:143] copyHostCerts
	I1120 21:48:16.048141  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048175  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:48:16.048187  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048261  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:48:16.048383  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048401  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:48:16.048406  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048432  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:48:16.048499  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048515  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:48:16.048520  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048545  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:48:16.048600  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:48:16.265083  893814 provision.go:177] copyRemoteCerts
	I1120 21:48:16.265160  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:48:16.265209  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.290442  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.396414  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:48:16.396484  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:48:16.418369  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:48:16.418439  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:48:16.437910  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:48:16.437992  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:48:16.456712  893814 provision.go:87] duration metric: took 427.242108ms to configureAuth
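
configureAuth generated a server certificate with SANs [127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube] before copying it to /etc/docker on the node. A sketch that parses such a PEM certificate and prints its SANs with crypto/x509; the file path is a placeholder:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumption: path to the generated server certificate.
    	data, err := os.ReadFile("/path/to/server.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// These correspond to the san=[...] list logged by provision.go above.
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    	fmt.Println("Expires: ", cert.NotAfter)
    }
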
	I1120 21:48:16.456739  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:48:16.457027  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:16.457179  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.476563  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:16.477370  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:16.477578  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:48:16.833311  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:48:16.833334  893814 machine.go:97] duration metric: took 4.356521136s to provisionDockerMachine
	I1120 21:48:16.833346  893814 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:48:16.833356  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:48:16.833422  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:48:16.833480  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.855465  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.967534  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:48:16.970900  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:48:16.970931  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:48:16.970942  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:48:16.971037  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:48:16.971121  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:48:16.971132  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:48:16.971248  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:48:16.980647  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:17.001479  893814 start.go:296] duration metric: took 168.114968ms for postStartSetup
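
postStartSetup scans .minikube/files and mirrors everything under it to the identical path on the node (files/etc/ssl/certs/8368522.pem becomes /etc/ssl/certs/8368522.pem). A sketch of that path mapping with filepath.WalkDir; the root is a placeholder and the actual copy step is omitted:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    // listSyncTargets maps every regular file under root to the absolute path it
    // would be copied to on the node (its path relative to root, rooted at "/").
    func listSyncTargets(root string) (map[string]string, error) {
    	targets := map[string]string{}
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel := strings.TrimPrefix(path, root)
    		targets[path] = "/" + strings.TrimLeft(filepath.ToSlash(rel), "/")
    		return nil
    	})
    	return targets, err
    }

    func main() {
    	// Assumption: local files root is a placeholder.
    	m, err := listSyncTargets("/path/to/.minikube/files")
    	if err != nil {
    		panic(err)
    	}
    	for local, remote := range m {
    		fmt.Printf("%s -> %s\n", local, remote)
    	}
    }
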
	I1120 21:48:17.001571  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:48:17.001627  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.030384  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.140073  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:48:17.144863  893814 fix.go:56] duration metric: took 5.012127885s for fixHost
	I1120 21:48:17.144890  893814 start.go:83] releasing machines lock for "ha-409851-m04", held for 5.012183123s
	I1120 21:48:17.144964  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:17.172547  893814 out.go:179] * Found network options:
	I1120 21:48:17.175556  893814 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:48:17.178404  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178431  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178457  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178669  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:48:17.178737  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:48:17.178785  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.178630  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:48:17.178897  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.197245  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.203292  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.340122  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:48:17.405989  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:48:17.406071  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:48:17.414439  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:48:17.414465  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:48:17.414498  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:48:17.414553  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:48:17.430500  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:48:17.443843  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:48:17.443906  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:48:17.460231  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:48:17.475600  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:48:17.602698  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:48:17.729597  893814 docker.go:234] disabling docker service ...
	I1120 21:48:17.729663  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:48:17.746588  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:48:17.760617  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:48:17.897973  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:48:18.030520  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
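
Because the runtime is CRI-O, the cri-docker and docker units are stopped, disabled and masked before CRI-O is configured. A sketch running the same systemctl sequence from Go with os/exec (flags as they appear in the log); failures for units that do not exist are tolerated, as above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) {
    	cmd := exec.Command("sudo", args...)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		// Some units may not exist on every image; log and continue.
    		fmt.Printf("%v: %v (%s)\n", args, err, out)
    	}
    }

    func main() {
    	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		run("systemctl", "stop", "-f", unit)
    	}
    	run("systemctl", "disable", "cri-docker.socket")
    	run("systemctl", "disable", "docker.socket")
    	run("systemctl", "mask", "cri-docker.service")
    	run("systemctl", "mask", "docker.service")
    }
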
	I1120 21:48:18.046315  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:48:18.066053  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:48:18.066129  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.077050  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:48:18.077175  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.090079  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.100829  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.110671  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:48:18.121922  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.135640  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.145103  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.155094  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:48:18.164129  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:48:18.171842  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:18.297944  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
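
The sed calls above pin pause_image to registry.k8s.io/pause:3.10.1 and cgroup_manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf, then restart CRI-O. A Go sketch of the same line rewrite using regexp; the file path matches the log and the program would need root:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites `key = ...` lines in a CRI-O drop-in, mirroring the
    // `sudo sed -i 's|^.*key = .*$|key = "value"|'` calls in the log above.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
    		panic(err)
    	}
    	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		panic(err)
    	}
    	// A `systemctl restart crio` is still needed afterwards, as in the log.
    }
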
	I1120 21:48:18.470275  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:48:18.470358  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:48:18.479108  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:48:18.479175  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:48:18.483098  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:48:18.507764  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:48:18.507924  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.539112  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.574786  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:18.577738  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:18.580677  893814 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:48:18.583863  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:18.602824  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:18.606736  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
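
The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway 192.168.49.1, dropping any stale entry first. The same effect in a short Go sketch, assuming sufficient privileges to rewrite the file:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost replaces any existing entry for name in the hosts file with ip,
    // matching the grep/echo pipeline in the log above.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	var kept []string
    	for _, line := range lines {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
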
	I1120 21:48:18.616366  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:18.616605  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:18.616854  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:18.635714  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:18.635989  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:48:18.636005  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:18.636021  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:18.636154  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:18.636201  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:18.636216  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:18.636245  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:18.636262  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:18.636274  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:18.636332  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:18.636367  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:18.636380  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:18.636406  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:18.636432  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:18.636458  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:18.636503  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:18.636535  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.636553  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.636564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.636585  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:18.657556  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:18.675080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:18.694571  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:18.716226  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:18.739895  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:18.768046  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:18.787993  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:18.794810  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.802541  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:18.810498  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814300  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814368  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.856630  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:18.864919  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.872737  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:18.880590  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884848  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884916  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.931413  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:18.939099  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.946583  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:18.954298  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960087  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960197  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:19.002435  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:19.012167  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:19.016432  893814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:48:19.016483  893814 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:48:19.016573  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:19.016654  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:19.026160  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:19.026286  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:48:19.036127  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:19.049708  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:19.064947  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:19.068918  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:19.079069  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.199728  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
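After scp'ing the 10-kubeadm.conf drop-in and kubelet.service unit, the joiner reloads systemd and starts kubelet. A quick way to confirm the same state by hand on the node (a sketch, not part of the captured run; the is-active check mirrors the one the log performs a few lines later):

    # Show the merged kubelet unit plus the 10-kubeadm.conf drop-in written above.
    sudo systemctl cat kubelet
    # Reload unit files and (re)start kubelet, as the log does.
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    # Liveness check comparable to the one used by the test harness.
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"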
	I1120 21:48:19.213792  893814 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:48:19.214167  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:19.219019  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:19.221920  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.355490  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.371278  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:19.371349  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:19.371586  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374629  893814 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:48:19.374657  893814 node_ready.go:38] duration metric: took 3.053659ms for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374671  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:19.374745  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:19.389451  893814 system_svc.go:56] duration metric: took 14.77112ms WaitForService to wait for kubelet
	I1120 21:48:19.389479  893814 kubeadm.go:587] duration metric: took 175.627603ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:19.389497  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:19.393426  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393518  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393535  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393542  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393547  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393552  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393557  893814 node_conditions.go:105] duration metric: took 4.054434ms to run NodePressure ...
	I1120 21:48:19.393575  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:19.393603  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:19.393953  893814 ssh_runner.go:195] Run: rm -f paused
	I1120 21:48:19.397987  893814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:48:19.398502  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
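The client config above is what pod_ready uses to poll kube-system pods by label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) until each is Ready or gone. A rough kubectl equivalent of one of those polls, shown only to illustrate what the wait is checking (it assumes the kubeconfig context minikube creates for the ha-409851 profile):

    # List the pods selected for one of the waited-on labels (the scheduler here).
    kubectl --context ha-409851 -n kube-system get pods -l component=kube-scheduler
    # Block until a specific pod reports the Ready condition, with a timeout
    # comparable to the 4m extra-wait budget used by this run.
    kubectl --context ha-409851 -n kube-system wait pod/kube-scheduler-ha-409851-m02 \
      --for=condition=Ready --timeout=4m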
	I1120 21:48:19.416487  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:48:21.424537  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:23.929996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:26.423923  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:28.424118  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:30.923501  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:33.423121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:35.423365  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:37.424719  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:39.923727  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:41.965360  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:44.435238  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:46.923403  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:48.923993  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:51.426397  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:53.924562  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:56.423976  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:58.431436  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:00.922387  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:02.923880  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:04.924121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:07.423527  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:09.424675  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:11.922381  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:13.922686  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:15.923609  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:17.924006  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:20.423097  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:22.423996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	I1120 21:49:23.424030  893814 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:49:23.424063  893814 pod_ready.go:86] duration metric: took 1m4.007542805s for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.424073  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.430119  893814 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:49:23.430146  893814 pod_ready.go:86] duration metric: took 6.066348ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.434497  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442021  893814 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:49:23.442059  893814 pod_ready.go:86] duration metric: took 7.532597ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442070  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.453471  893814 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:49:23.453510  893814 pod_ready.go:86] duration metric: took 11.432528ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.460522  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.617970  893814 request.go:683] "Waited before sending request" delay="157.293328ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851"
	I1120 21:49:23.817544  893814 request.go:683] "Waited before sending request" delay="194.243021ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:23.820786  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:49:23.820814  893814 pod_ready.go:86] duration metric: took 360.266065ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.820823  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.018232  893814 request.go:683] "Waited before sending request" delay="197.334029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851-m02"
	I1120 21:49:24.217808  893814 request.go:683] "Waited before sending request" delay="195.31208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:24.220981  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:49:24.221009  893814 pod_ready.go:86] duration metric: took 400.178739ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.418386  893814 request.go:683] "Waited before sending request" delay="197.22929ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:49:24.423065  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.617542  893814 request.go:683] "Waited before sending request" delay="194.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:49:24.818451  893814 request.go:683] "Waited before sending request" delay="195.369435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:24.821748  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:49:24.821777  893814 pod_ready.go:86] duration metric: took 398.632324ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.821787  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.018152  893814 request.go:683] "Waited before sending request" delay="196.257511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:49:25.217440  893814 request.go:683] "Waited before sending request" delay="193.274434ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:25.221099  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:49:25.221184  893814 pod_ready.go:86] duration metric: took 399.388707ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.417592  893814 request.go:683] "Waited before sending request" delay="196.294697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 21:49:25.421901  893814 pod_ready.go:83] waiting for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.618261  893814 request.go:683] "Waited before sending request" delay="196.198417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qqxh"
	I1120 21:49:25.818227  893814 request.go:683] "Waited before sending request" delay="195.266861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:25.822845  893814 pod_ready.go:94] pod "kube-proxy-4qqxh" is "Ready"
	I1120 21:49:25.822876  893814 pod_ready.go:86] duration metric: took 400.891774ms for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.822887  893814 pod_ready.go:83] waiting for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.018147  893814 request.go:683] "Waited before sending request" delay="195.181839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz7vt"
	I1120 21:49:26.218218  893814 request.go:683] "Waited before sending request" delay="194.325204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:26.221718  893814 pod_ready.go:94] pod "kube-proxy-pz7vt" is "Ready"
	I1120 21:49:26.221756  893814 pod_ready.go:86] duration metric: took 398.861103ms for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.221767  893814 pod_ready.go:83] waiting for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.418209  893814 request.go:683] "Waited before sending request" delay="196.333755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhl6"
	I1120 21:49:26.618151  893814 request.go:683] "Waited before sending request" delay="196.349344ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m04"
	I1120 21:49:26.623181  893814 pod_ready.go:94] pod "kube-proxy-xnhl6" is "Ready"
	I1120 21:49:26.623210  893814 pod_ready.go:86] duration metric: took 401.436889ms for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.817459  893814 request.go:683] "Waited before sending request" delay="194.131676ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1120 21:49:26.821013  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.018492  893814 request.go:683] "Waited before sending request" delay="197.322386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851"
	I1120 21:49:27.217513  893814 request.go:683] "Waited before sending request" delay="190.181719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:27.226443  893814 pod_ready.go:94] pod "kube-scheduler-ha-409851" is "Ready"
	I1120 21:49:27.226520  893814 pod_ready.go:86] duration metric: took 405.47524ms for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.226546  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.417983  893814 request.go:683] "Waited before sending request" delay="191.325659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:27.618140  893814 request.go:683] "Waited before sending request" delay="196.249535ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:27.817620  893814 request.go:683] "Waited before sending request" delay="90.393989ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:28.018196  893814 request.go:683] "Waited before sending request" delay="197.189707ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.417767  893814 request.go:683] "Waited before sending request" delay="186.33455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.817959  893814 request.go:683] "Waited before sending request" delay="87.275796ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:49:29.233343  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:31.233779  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:33.234413  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:35.733284  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:38.233049  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:40.233361  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:42.235442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:44.734815  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:47.232729  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:49.233113  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:51.234068  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:53.732962  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:56.233319  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:58.734472  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:01.234009  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:03.234832  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:05.733469  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:08.234179  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:10.735546  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:12.735872  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:14.736374  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:16.740445  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:19.233806  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:21.733741  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:23.735456  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:26.232453  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:28.233317  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:30.735024  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:32.735868  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:35.234232  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:37.734207  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:40.234052  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:42.240134  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:44.733059  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:46.733334  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:48.738389  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:51.233067  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:53.234660  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:55.733852  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:57.734484  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:00.249903  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:02.732606  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:04.736105  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:07.233350  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:09.733211  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:11.733392  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:14.234536  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:16.732259  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:18.735892  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:20.735996  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:23.234680  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:25.733375  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:27.733961  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:29.735523  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:32.236382  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:34.733336  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:36.733744  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:38.734442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:40.734588  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:42.734796  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:44.735137  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:46.736111  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:49.233632  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:51.733070  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:53.734822  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:56.233800  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:58.234379  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:00.264529  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:02.742360  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:05.233819  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:07.733077  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:09.734867  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:12.233625  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:14.733387  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:16.734342  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:18.734797  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	I1120 21:52:19.398473  893814 pod_ready.go:86] duration metric: took 2m52.171896252s for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:52:19.398508  893814 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:52:19.398524  893814 pod_ready.go:40] duration metric: took 4m0.000499103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:52:19.401528  893814 out.go:203] 
	W1120 21:52:19.404511  893814 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:52:19.407414  893814 out.go:203] 
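The run exits with GUEST_START because kube-scheduler-ha-409851-m02 never reported Ready within the 4m extra-wait budget (the wait loop above retried for 2m52s before the deadline). When triaging this by hand, the usual first steps are to look at the pod's conditions and at the static-pod container on the node itself; this is a sketch and not part of the captured run:

    # Inspect conditions and recent events for the pod that never became Ready.
    kubectl --context ha-409851 -n kube-system describe pod kube-scheduler-ha-409851-m02
    # Check the scheduler container directly on the m02 node via CRI-O.
    minikube -p ha-409851 ssh -n ha-409851-m02 -- sudo crictl ps -a --name kube-scheduler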
	
	
	==> CRI-O <==
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811470727Z" level=info msg="Running pod sandbox: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811536598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.815250925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.818484951Z" level=info msg="Ran pod sandbox b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb with infra container: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.820409438Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de13f0e7-3c4a-42d5-9c8d-3a3bc426d7fd name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.826704318Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2666544-b5e7-4f59-a2f3-144082db7373 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.828605429Z" level=info msg="Creating container: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.829288957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.834469699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.835169227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.85382609Z" level=info msg="Created container bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.854825659Z" level=info msg="Starting container: bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454" id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.859192598Z" level=info msg="Started container" PID=1405 containerID=bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454 description=kube-system/kindnet-7hmbf/kindnet-cni id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.206856782Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210460298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.21049604Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210517833Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.213977617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214129201Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214171162Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217329445Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217362923Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217385791Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220578314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220610922Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bad91fe692656       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 minutes ago       Running             kindnet-cni               2                   b2d79927049c1       kindnet-7hmbf                       kube-system
	45150399abc60       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   3 minutes ago       Running             busybox                   2                   86a0aabe892ba       busybox-7b57f96db7-mgvhj            default
	282f28167fcd8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       3                   cf9b9178a22be       storage-provisioner                 kube-system
	283abd913ff4d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 minutes ago       Running             kube-proxy                2                   51827a0562eaa       kube-proxy-4qqxh                    kube-system
	3064e4d2cac3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   3 minutes ago       Running             coredns                   2                   f1efa47298912       coredns-66bc5c9577-pjk6c            kube-system
	474e5b9d1f070       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   3 minutes ago       Running             coredns                   2                   fb899ea594eab       coredns-66bc5c9577-vfsp6            kube-system
	5ccb03706c0f4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   7                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	53d8cbac386fc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   6                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	21eb6c12eb9d6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   4 minutes ago       Running             kube-apiserver            4                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
	e758e4601a79a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  2                   276d004d64a0f       kube-vip-ha-409851                  kube-system
	bf7fd293f188a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   251d917d7ecb8       kube-scheduler-ha-409851            kube-system
	29879cb03dd0a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      2                   44edbb77d8632       etcd-ha-409851                      kube-system
	d2a9e01261d92       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            3                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
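The table above is the CRI-level container listing captured at collection time. The same view can be reproduced on the node with crictl (not part of the captured run):

    # All containers (running and exited) known to CRI-O on this node.
    sudo crictl ps -a
    # The pod sandboxes that back the POD ID column.
    sudo crictl pods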
	
	
	==> coredns [3064e4d2cac3e067a0a0ba1353e3b89a5da11e7e5a320f683346febeadfbb73a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40971 - 38824 "HINFO IN 3995400066811168115.5738602718581230250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004050865s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [474e5b9d1f07007a252c22fb0e9172e8fd3235037aecc813a1d66128aa8e0d26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46282 - 18255 "HINFO IN 2304188649282025477.3571330681415947141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021110391s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
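Both coredns replicas log "dial tcp 10.96.0.1:443: i/o timeout" while listing Namespaces, Services, and EndpointSlices, i.e. they could not reach the in-cluster kubernetes Service during the apiserver restart window. A quick way to check that path from outside the cluster (a sketch using standard kubectl queries, not commands from this run):

    # The Service VIP coredns is timing out on; ClusterIP should be 10.96.0.1.
    kubectl get svc kubernetes
    # The apiserver endpoints behind that VIP; an empty list here would explain
    # the i/o timeouts seen above.
    kubectl get endpoints kubernetes
    # apiserver health as seen through the kubeconfig server address.
    kubectl get --raw /readyz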
	
	
	==> describe nodes <==
	Name:               ha-409851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-409851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                1f114e92-c1bf-4c10-9121-0a6c185877b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mgvhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-pjk6c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 coredns-66bc5c9577-vfsp6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-ha-409851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-7hmbf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-409851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-409851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4qqxh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-409851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-409851                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 3m37s                kube-proxy       
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     19m (x8 over 19m)    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-409851 status is now: NodeReady
	  Normal   RegisteredNode           17m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeHasSufficientMemory  6m2s (x8 over 6m2s)  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m2s (x8 over 6m2s)  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m10s                node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           3m37s                node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	
	
	Name:               ha-409851-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-409851-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3904cc8f-d8d1-4880-8dca-3fb5e1048dff
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqh2f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-409851-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-56lr8                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-409851-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-409851-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pz7vt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-409851-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-409851-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 3m38s                  kube-proxy       
	  Normal   RegisteredNode           19m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 5m59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        4m59s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           3m37s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	
	
	Name:               ha-409851-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-409851-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                2c1b4976-2a70-4f78-8646-ed9804d613b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-snllw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kindnet-2d5r9               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-proxy-xnhl6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 3m49s                kube-proxy       
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 10m                  kube-proxy       
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)    kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)    kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x3 over 16m)    kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-409851-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeNotReady             12m                  node-controller  Node ha-409851-m04 status is now: NodeNotReady
	  Normal   Starting                 11m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)    kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m10s                node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   Starting                 4m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m5s (x8 over 4m8s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m5s (x8 over 4m8s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m5s (x8 over 4m8s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m37s                node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	
	
	==> dmesg <==
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	[Nov20 21:32] overlayfs: idmapped layers are currently not supported
	[Nov20 21:33] overlayfs: idmapped layers are currently not supported
	[Nov20 21:34] overlayfs: idmapped layers are currently not supported
	[Nov20 21:36] overlayfs: idmapped layers are currently not supported
	[Nov20 21:37] overlayfs: idmapped layers are currently not supported
	[Nov20 21:38] overlayfs: idmapped layers are currently not supported
	[  +3.034217] overlayfs: idmapped layers are currently not supported
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	[Nov20 21:46] overlayfs: idmapped layers are currently not supported
	[  +2.922279] overlayfs: idmapped layers are currently not supported
	[Nov20 21:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de] <==
	{"level":"info","ts":"2025-11-20T21:48:04.987262Z","caller":"traceutil/trace.go:172","msg":"trace[1675718777] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3453; }","duration":"117.030077ms","start":"2025-11-20T21:48:04.870220Z","end":"2025-11-20T21:48:04.987250Z","steps":["trace[1675718777] 'agreement among raft nodes before linearized reading'  (duration: 108.221542ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:04.997350Z","caller":"traceutil/trace.go:172","msg":"trace[1770117129] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:3453; }","duration":"121.253555ms","start":"2025-11-20T21:48:04.876071Z","end":"2025-11-20T21:48:04.997324Z","steps":["trace[1770117129] 'agreement among raft nodes before linearized reading'  (duration: 102.33561ms)","trace[1770117129] 'range keys from in-memory index tree'  (duration: 18.887036ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:48:05.098010Z","caller":"traceutil/trace.go:172","msg":"trace[2037113995] range","detail":"{range_begin:/registry/ingressclasses; range_end:; response_count:0; response_revision:3453; }","duration":"102.975273ms","start":"2025-11-20T21:48:04.995024Z","end":"2025-11-20T21:48:05.098000Z","steps":["trace[2037113995] 'agreement among raft nodes before linearized reading'  (duration: 102.942698ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100356Z","caller":"traceutil/trace.go:172","msg":"trace[162038184] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3453; }","duration":"111.304394ms","start":"2025-11-20T21:48:04.989041Z","end":"2025-11-20T21:48:05.100345Z","steps":["trace[162038184] 'agreement among raft nodes before linearized reading'  (duration: 111.259043ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100414Z","caller":"traceutil/trace.go:172","msg":"trace[1479816564] range","detail":"{range_begin:/registry/deviceclasses/; range_end:/registry/deviceclasses0; response_count:0; response_revision:3453; }","duration":"122.163392ms","start":"2025-11-20T21:48:04.978245Z","end":"2025-11-20T21:48:05.100409Z","steps":["trace[1479816564] 'agreement among raft nodes before linearized reading'  (duration: 122.142174ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100443Z","caller":"traceutil/trace.go:172","msg":"trace[1071692997] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:3453; }","duration":"122.210957ms","start":"2025-11-20T21:48:04.978228Z","end":"2025-11-20T21:48:05.100439Z","steps":["trace[1071692997] 'agreement among raft nodes before linearized reading'  (duration: 122.195835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100470Z","caller":"traceutil/trace.go:172","msg":"trace[321870719] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:3453; }","duration":"122.649806ms","start":"2025-11-20T21:48:04.977816Z","end":"2025-11-20T21:48:05.100466Z","steps":["trace[321870719] 'agreement among raft nodes before linearized reading'  (duration: 122.636702ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100504Z","caller":"traceutil/trace.go:172","msg":"trace[391658353] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:3453; }","duration":"122.764745ms","start":"2025-11-20T21:48:04.977735Z","end":"2025-11-20T21:48:05.100500Z","steps":["trace[391658353] 'agreement among raft nodes before linearized reading'  (duration: 122.746931ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100571Z","caller":"traceutil/trace.go:172","msg":"trace[1747834679] range","detail":"{range_begin:compact_rev_key; range_end:; response_count:1; response_revision:3453; }","duration":"122.847642ms","start":"2025-11-20T21:48:04.977719Z","end":"2025-11-20T21:48:05.100567Z","steps":["trace[1747834679] 'agreement among raft nodes before linearized reading'  (duration: 122.792413ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100602Z","caller":"traceutil/trace.go:172","msg":"trace[994787852] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:3453; }","duration":"123.045857ms","start":"2025-11-20T21:48:04.977552Z","end":"2025-11-20T21:48:05.100598Z","steps":["trace[994787852] 'agreement among raft nodes before linearized reading'  (duration: 123.029184ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100652Z","caller":"traceutil/trace.go:172","msg":"trace[1075319704] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:3453; }","duration":"123.113213ms","start":"2025-11-20T21:48:04.977533Z","end":"2025-11-20T21:48:05.100646Z","steps":["trace[1075319704] 'agreement among raft nodes before linearized reading'  (duration: 123.079128ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100678Z","caller":"traceutil/trace.go:172","msg":"trace[1734896502] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:3453; }","duration":"123.161287ms","start":"2025-11-20T21:48:04.977513Z","end":"2025-11-20T21:48:05.100674Z","steps":["trace[1734896502] 'agreement among raft nodes before linearized reading'  (duration: 123.149406ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100726Z","caller":"traceutil/trace.go:172","msg":"trace[65494134] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:3453; }","duration":"123.22569ms","start":"2025-11-20T21:48:04.977496Z","end":"2025-11-20T21:48:05.100722Z","steps":["trace[65494134] 'agreement among raft nodes before linearized reading'  (duration: 123.189883ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100749Z","caller":"traceutil/trace.go:172","msg":"trace[946885568] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:3453; }","duration":"123.29692ms","start":"2025-11-20T21:48:04.977448Z","end":"2025-11-20T21:48:05.100745Z","steps":["trace[946885568] 'agreement among raft nodes before linearized reading'  (duration: 123.287205ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100772Z","caller":"traceutil/trace.go:172","msg":"trace[1602857348] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:3453; }","duration":"123.339439ms","start":"2025-11-20T21:48:04.977429Z","end":"2025-11-20T21:48:05.100768Z","steps":["trace[1602857348] 'agreement among raft nodes before linearized reading'  (duration: 123.328403ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100835Z","caller":"traceutil/trace.go:172","msg":"trace[1657109007] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:3453; }","duration":"123.41807ms","start":"2025-11-20T21:48:04.977413Z","end":"2025-11-20T21:48:05.100831Z","steps":["trace[1657109007] 'agreement among raft nodes before linearized reading'  (duration: 123.366041ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100863Z","caller":"traceutil/trace.go:172","msg":"trace[256739583] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:3453; }","duration":"123.461788ms","start":"2025-11-20T21:48:04.977397Z","end":"2025-11-20T21:48:05.100859Z","steps":["trace[256739583] 'agreement among raft nodes before linearized reading'  (duration: 123.448233ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100889Z","caller":"traceutil/trace.go:172","msg":"trace[157362704] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:3453; }","duration":"123.504331ms","start":"2025-11-20T21:48:04.977378Z","end":"2025-11-20T21:48:05.100882Z","steps":["trace[157362704] 'agreement among raft nodes before linearized reading'  (duration: 123.492729ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100936Z","caller":"traceutil/trace.go:172","msg":"trace[439810993] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:2; response_revision:3453; }","duration":"123.968846ms","start":"2025-11-20T21:48:04.976963Z","end":"2025-11-20T21:48:05.100932Z","steps":["trace[439810993] 'agreement among raft nodes before linearized reading'  (duration: 123.933875ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100963Z","caller":"traceutil/trace.go:172","msg":"trace[1409698449] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:3453; }","duration":"124.019924ms","start":"2025-11-20T21:48:04.976938Z","end":"2025-11-20T21:48:05.100958Z","steps":["trace[1409698449] 'agreement among raft nodes before linearized reading'  (duration: 124.006566ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100988Z","caller":"traceutil/trace.go:172","msg":"trace[1232400826] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:3453; }","duration":"124.21593ms","start":"2025-11-20T21:48:04.976768Z","end":"2025-11-20T21:48:05.100984Z","steps":["trace[1232400826] 'agreement among raft nodes before linearized reading'  (duration: 124.203794ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.101052Z","caller":"traceutil/trace.go:172","msg":"trace[1428873499] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:7; response_revision:3453; }","duration":"124.603382ms","start":"2025-11-20T21:48:04.976444Z","end":"2025-11-20T21:48:05.101048Z","steps":["trace[1428873499] 'agreement among raft nodes before linearized reading'  (duration: 124.551451ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.101076Z","caller":"traceutil/trace.go:172","msg":"trace[1456894827] range","detail":"{range_begin:/registry/leases; range_end:; response_count:0; response_revision:3453; }","duration":"125.518633ms","start":"2025-11-20T21:48:04.975553Z","end":"2025-11-20T21:48:05.101072Z","steps":["trace[1456894827] 'agreement among raft nodes before linearized reading'  (duration: 125.507408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.163635Z","caller":"traceutil/trace.go:172","msg":"trace[1396962058] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:3453; }","duration":"124.48073ms","start":"2025-11-20T21:48:05.039143Z","end":"2025-11-20T21:48:05.163623Z","steps":["trace[1396962058] 'agreement among raft nodes before linearized reading'  (duration: 124.42829ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.163909Z","caller":"traceutil/trace.go:172","msg":"trace[247699851] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:3453; }","duration":"128.382177ms","start":"2025-11-20T21:48:05.035520Z","end":"2025-11-20T21:48:05.163902Z","steps":["trace[247699851] 'agreement among raft nodes before linearized reading'  (duration: 128.353606ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:52:21 up  4:34,  0 user,  load average: 0.37, 0.91, 1.26
	Linux ha-409851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454] <==
	I1120 21:51:36.212698       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:51:46.206671       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:51:46.206714       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:51:46.206892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:51:46.206910       1 main.go:301] handling current node
	I1120 21:51:46.206925       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:51:46.206929       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:51:56.208319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:51:56.208360       1 main.go:301] handling current node
	I1120 21:51:56.208376       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:51:56.208382       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:51:56.208532       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:51:56.208547       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:06.212796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:52:06.212831       1 main.go:301] handling current node
	I1120 21:52:06.212847       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:52:06.212853       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:52:06.213011       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:52:06.213024       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:16.213240       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:52:16.213273       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:52:16.213426       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:52:16.213439       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:16.213508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:52:16.213519       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21eb6c12eb9d6c645ff79035e852942fc36d120d38e6634372d84d1fff4b1c3a] <==
	I1120 21:48:05.164517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:48:05.251597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:48:05.267215       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:48:05.273069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:48:05.273181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:48:05.301644       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:48:05.303022       1 policy_source.go:240] refreshing policies
	I1120 21:48:05.343504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:48:05.343769       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:48:05.344234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:48:05.350900       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:48:05.361480       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:48:05.362670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:48:05.370720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:48:05.362690       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:48:11.243570       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:48:11.243643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:48:11.543897       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	W1120 21:48:11.986847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1120 21:48:11.988628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:48:11.996638       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:48:31.545364       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:48:44.311228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:48:46.301552       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:49:23.280882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd] <==
	{"level":"warn","ts":"2025-11-20T21:47:25.675429Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675510Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cd61e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675620Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675648Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675671Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000797860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675698Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400224d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675596Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40007970e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675739Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019532c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675766Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b6d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675801Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675829Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400276c780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675854Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675804Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d83c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675908Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.827032Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1120 21:47:25.827154       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1120 21:47:25.827227       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828931       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828993       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.830257       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.94329ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-11-20T21:47:26.843128Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	F1120 21:47:27.272727       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43] <==
	I1120 21:47:44.271187       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:47:45.887863       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 21:47:45.887899       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:47:45.889312       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 21:47:45.889482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 21:47:45.889741       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 21:47:45.889803       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 21:47:55.905939       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [5ccb03706c0f435e1a09ff9e7ebbe19aee8f89c6e7467182aa27e3874e6c323d] <==
	I1120 21:48:44.191236       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:48:44.191247       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:48:44.192321       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:48:44.194569       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:48:44.194593       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:48:44.194667       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:48:44.196895       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:48:44.197845       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:48:44.200483       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:48:44.201695       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:48:44.201862       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:48:44.201975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m04"
	I1120 21:48:44.202045       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851"
	I1120 21:48:44.202137       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m02"
	I1120 21:48:44.202200       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:48:44.213792       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:48:44.217890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:48:44.217972       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:48:44.218002       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:48:44.234704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:49:23.353198       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.353878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	E1120 21:49:23.376944       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1120 21:49:23.392884       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.393588       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [283abd913ff4d5c1081b76097b71e66eb996220513fadc607f8f68cd50071785] <==
	I1120 21:48:42.954042       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:48:43.040713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:48:43.141728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:48:43.141763       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:48:43.141860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:48:43.160133       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:48:43.160188       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:48:43.163678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:48:43.163975       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:48:43.164011       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:48:43.168077       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:48:43.168182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:48:43.168489       1 config.go:200] "Starting service config controller"
	I1120 21:48:43.168532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:48:43.169345       1 config.go:309] "Starting node config controller"
	I1120 21:48:43.169359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:48:43.169367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:48:43.172283       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:48:43.172357       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:48:43.268742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:48:43.268898       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:48:43.272772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede] <==
	E1120 21:47:42.144862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:47:42.442641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:47:42.927579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:47:43.326155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:47:43.512114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:47:44.079747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:47:44.466132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:47:51.236636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:47:53.441273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:47:53.443366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:47:55.204767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:47:56.179669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:47:56.809409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:47:58.566654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:47:58.739996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:47:59.402329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:47:59.593992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:48:00.869852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:48:01.061027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:48:01.453651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:48:03.292850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:48:03.733908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:48:03.942583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:48:04.337599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:48:05.178246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.102858     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7hmbf\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c" pod="kube-system/kindnet-7hmbf"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116790     805 kubelet_node_status.go:124] "Node was previously registered" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116930     805 kubelet_node_status.go:78] "Successfully registered node" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116963     805 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.117831     805 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.123111     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"storage-provisioner\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="349c85dc-6341-43ab-b388-8734d72e3040" pod="kube-system/storage-provisioner"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.167806     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-vip-ha-409851\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="6f4588d400318593d47cec16914af85c" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.254640     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4qqxh\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa" pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:48:14 ha-409851 kubelet[805]: I1120 21:48:14.806712     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:14 ha-409851 kubelet[805]: E1120 21:48:14.806952     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:19 ha-409851 kubelet[805]: E1120 21:48:19.826466     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log: no such file or directory
	Nov 20 21:48:26 ha-409851 kubelet[805]: I1120 21:48:26.807409     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:26 ha-409851 kubelet[805]: E1120 21:48:26.807617     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.761938     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jvsfx], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-vfsp6" podUID="09c1e0dd-0208-4f69-aac9-670197f4c848"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cg4c6], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-pjk6c" podUID="ad25e130-cf9b-4f5e-b082-23c452bd1c5c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rjfpv], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kube-proxy-4qqxh" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767309     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ndpsr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kindnet-7hmbf" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768337     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jlbcp], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/storage-provisioner" podUID="349c85dc-6341-43ab-b388-8734d72e3040"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768345     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-t5g2b], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="default/busybox-7b57f96db7-mgvhj" podUID="79106a87-339a-4b68-ad4e-12ef6b0b03ca"
	Nov 20 21:48:34 ha-409851 kubelet[805]: I1120 21:48:34.138084     805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:48:39 ha-409851 kubelet[805]: I1120 21:48:39.807902     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.897097     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49 WatchSource:0}: Error finding container fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49: Status 404 returned error can't find the container with id fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.904639     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b WatchSource:0}: Error finding container f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b: Status 404 returned error can't find the container with id f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b
	Nov 20 21:48:42 ha-409851 kubelet[805]: W1120 21:48:42.819704     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac WatchSource:0}: Error finding container 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac: Status 404 returned error can't find the container with id 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac
	Nov 20 21:48:43 ha-409851 kubelet[805]: W1120 21:48:43.900976     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce WatchSource:0}: Error finding container 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce: Status 404 returned error can't find the container with id 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-409851 -n ha-409851
helpers_test.go:269: (dbg) Run:  kubectl --context ha-409851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (369.56s)
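The scheduler log above is dominated by RBAC "forbidden" list errors, and the kubelet shows kube-controller-manager in CrashLoopBackOff while the HA cluster comes back up. One quick way to separate genuinely missing RBAC from an API server that has not yet finished reconciling its bootstrap roles is to re-run one of the denied checks by hand while impersonating the scheduler. This is only a sketch, and it assumes the admin kubeconfig behind the ha-409851 context is permitted to impersonate users:

    # Re-check one of the permissions the scheduler log reports as forbidden.
    kubectl --context ha-409851 auth can-i list storageclasses.storage.k8s.io \
      --as=system:kube-scheduler
    # "yes" suggests a transient startup race; "no" points at genuinely missing RBAC.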

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-409851" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-409851\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-409851\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-409851\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-409851
helpers_test.go:243: (dbg) docker inspect ha-409851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	        "Created": "2025-11-20T21:32:05.722530265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893938,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:46:13.072458678Z",
	            "FinishedAt": "2025-11-20T21:46:12.348513553Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hosts",
	        "LogPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853-json.log",
	        "Name": "/ha-409851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-409851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-409851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	                "LowerDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-409851",
	                "Source": "/var/lib/docker/volumes/ha-409851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-409851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-409851",
	                "name.minikube.sigs.k8s.io": "ha-409851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc18c8f3af5088b5bb1d9ce24d0b962e6479dd84027377689edccf3f48baefb2",
	            "SandboxKey": "/var/run/docker/netns/cc18c8f3af50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-409851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:23:29:98:04:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad232b357b1bc65babf7a48f3581b00686ef0ccc0f86acee1a57f8a071f682f1",
	                    "EndpointID": "42281e0852c3f6fd3ef3ee7cb17a8b94df54edc9c35c3a29e94bd1eb0ceadb4a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-409851",
	                        "d20916d298c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
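The docker inspect output above shows each container port published on 127.0.0.1 with an ephemeral host port (22/tcp mapped to 33937). Later in this log minikube resolves the SSH port with the same Go-template lookup via cli_runner; a standalone version of that lookup, assuming the ha-409851 container is still running, looks like:

    # Resolve the host port bound to the container's SSH port (22/tcp -> 33937 above).
    docker container inspect ha-409851 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'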
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 logs -n 25: (1.47948979s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt                                                            │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt                                                │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node start m02 --alsologtostderr -v 5                                                                                     │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │                     │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:38 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5                                                                                  │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:38 UTC │                     │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │                     │
	│ node    │ ha-409851 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:45 UTC │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:46 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:46:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:46:12.791438  893814 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:46:12.791547  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791556  893814 out.go:374] Setting ErrFile to fd 2...
	I1120 21:46:12.791561  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791812  893814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:46:12.792153  893814 out.go:368] Setting JSON to false
	I1120 21:46:12.792975  893814 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16098,"bootTime":1763659075,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:46:12.793039  893814 start.go:143] virtualization:  
	I1120 21:46:12.796567  893814 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:46:12.800274  893814 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:46:12.800333  893814 notify.go:221] Checking for updates...
	I1120 21:46:12.805930  893814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:46:12.808740  893814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:12.811665  893814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:46:12.814590  893814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:46:12.817489  893814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:46:12.820869  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:12.821456  893814 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:46:12.854504  893814 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:46:12.854629  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.916245  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.907017867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.916354  893814 docker.go:319] overlay module found
	I1120 21:46:12.921281  893814 out.go:179] * Using the docker driver based on existing profile
	I1120 21:46:12.924086  893814 start.go:309] selected driver: docker
	I1120 21:46:12.924103  893814 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.924235  893814 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:46:12.924335  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.982109  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.972838498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.982542  893814 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:46:12.982605  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:12.982654  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:12.982705  893814 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.987881  893814 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:46:12.990803  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:12.993745  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:12.996606  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:12.996692  893814 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:46:12.996690  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:12.996704  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:12.996891  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:12.996899  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:12.997043  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.017636  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:13.017661  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:13.017680  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:13.017708  893814 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:13.017784  893814 start.go:364] duration metric: took 50.068µs to acquireMachinesLock for "ha-409851"
	I1120 21:46:13.017814  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:13.017825  893814 fix.go:54] fixHost starting: 
	I1120 21:46:13.018084  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.035594  893814 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:46:13.035627  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:13.038907  893814 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:46:13.039022  893814 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:46:13.304460  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.328120  893814 kic.go:430] container "ha-409851" state is running.
	I1120 21:46:13.328719  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:13.354344  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.354582  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:13.354651  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:13.379550  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:13.379870  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:13.379890  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:13.380728  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:46:16.522806  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.522896  893814 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:46:16.523007  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.540197  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.540514  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.540535  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:46:16.694351  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.694434  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.711779  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.712102  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.712124  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:16.851168  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:16.851196  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:16.851221  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:16.851230  893814 provision.go:84] configureAuth start
	I1120 21:46:16.851299  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:16.868945  893814 provision.go:143] copyHostCerts
	I1120 21:46:16.868995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869035  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:16.869055  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869140  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:16.869236  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869258  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:16.869266  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869304  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:16.869353  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869373  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:16.869384  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869416  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:16.869469  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:46:16.952356  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:16.952425  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:16.952478  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.973308  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.074564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:17.074634  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:46:17.091858  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:17.091917  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:17.109606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:17.109674  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:17.127878  893814 provision.go:87] duration metric: took 276.622438ms to configureAuth
	I1120 21:46:17.127903  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:17.128138  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:17.128246  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.145230  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:17.145555  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:17.145568  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:17.521503  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:17.521523  893814 machine.go:97] duration metric: took 4.166931199s to provisionDockerMachine
	I1120 21:46:17.521535  893814 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:46:17.521545  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:17.521607  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:17.521648  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.543040  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.642924  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:17.646266  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:17.646295  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:17.646306  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:17.646362  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:17.646441  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:17.646453  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:17.646557  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:17.654029  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:17.671759  893814 start.go:296] duration metric: took 150.208491ms for postStartSetup
	I1120 21:46:17.671861  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:17.671903  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.688970  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.788149  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:17.792950  893814 fix.go:56] duration metric: took 4.775117155s for fixHost
	I1120 21:46:17.792985  893814 start.go:83] releasing machines lock for "ha-409851", held for 4.775188491s
	I1120 21:46:17.793094  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:17.811172  893814 ssh_runner.go:195] Run: cat /version.json
	I1120 21:46:17.811227  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.811496  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:17.811569  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.830577  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.847514  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:18.032855  893814 ssh_runner.go:195] Run: systemctl --version
	I1120 21:46:18.039676  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:18.084631  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:18.089315  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:18.089397  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:18.097880  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:18.097906  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:18.097957  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:18.098046  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:18.113581  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:18.127110  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:18.127198  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:18.143327  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:18.156859  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:18.285846  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:18.406177  893814 docker.go:234] disabling docker service ...
	I1120 21:46:18.406303  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:18.422621  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:18.436488  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:18.557150  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:18.669376  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:18.683020  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:18.696701  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:18.696805  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.705450  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:18.705544  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.714727  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.724078  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.733001  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:18.741246  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.750057  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.758559  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.767154  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:18.774675  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:18.782542  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:18.908183  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:46:19.102647  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:46:19.102768  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:46:19.107633  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:46:19.107713  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:46:19.112020  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:46:19.139825  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:46:19.139929  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.171276  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.211415  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:46:19.214291  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:46:19.231738  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:46:19.235755  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.246147  893814 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:46:19.246304  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:19.246367  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.290538  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.290565  893814 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:46:19.290626  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.316155  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.316180  893814 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:46:19.316189  893814 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:46:19.316303  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:46:19.316387  893814 ssh_runner.go:195] Run: crio config
	I1120 21:46:19.371279  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:19.371300  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:19.371316  893814 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:46:19.371339  893814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:46:19.371462  893814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:46:19.371484  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:46:19.371537  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:46:19.384116  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:19.384238  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:46:19.384326  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:46:19.392356  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:46:19.392430  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:46:19.400069  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:46:19.413705  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:46:19.427554  893814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:46:19.440926  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:46:19.454200  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:46:19.457772  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.467840  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:19.582412  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:46:19.599710  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:46:19.599791  893814 certs.go:195] generating shared ca certs ...
	I1120 21:46:19.599822  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.599996  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:46:19.600074  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:46:19.600106  893814 certs.go:257] generating profile certs ...
	I1120 21:46:19.600223  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:46:19.600276  893814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee
	I1120 21:46:19.600310  893814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1120 21:46:19.750831  893814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee ...
	I1120 21:46:19.750905  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee: {Name:mk539a3dda8a36b48c6c5c30b7491f9043b065a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751146  893814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee ...
	I1120 21:46:19.751277  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee: {Name:mk851c2f98f193e8bb483e43db8a657c69eae8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751416  893814 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:46:19.751615  893814 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:46:19.751796  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:46:19.751838  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:46:19.751886  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:46:19.751918  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:46:19.751961  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:46:19.751995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:46:19.752027  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:46:19.752070  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:46:19.752104  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:46:19.752174  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:46:19.752242  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:46:19.752268  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:46:19.752317  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:46:19.752367  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:46:19.752427  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:46:19.752538  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:19.752606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:46:19.752639  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:46:19.752686  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:19.753263  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:46:19.782536  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:46:19.807080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:46:19.842006  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:46:19.863690  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:46:19.882351  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:46:19.902131  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:46:19.923247  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:46:19.943308  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:46:19.961281  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:46:19.981823  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:46:19.999815  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:46:20.019398  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:46:20.026511  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.035530  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:46:20.043827  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048146  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048252  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.090685  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:46:20.099210  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.107103  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:46:20.115263  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119310  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119405  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.160958  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:46:20.168922  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.176806  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:46:20.184554  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188641  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188742  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.232577  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:46:20.246815  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:46:20.252000  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:46:20.307993  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:46:20.361067  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:46:20.404267  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:46:20.471141  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:46:20.556774  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:46:20.620581  893814 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:20.620772  893814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:46:20.620872  893814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:46:20.672595  893814 cri.go:89] found id: "e758e4601a79aacd9dd015c82692281d156d9100d6bc2fb480b11d07ff223294"
	I1120 21:46:20.672675  893814 cri.go:89] found id: "bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede"
	I1120 21:46:20.672702  893814 cri.go:89] found id: "29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de"
	I1120 21:46:20.672723  893814 cri.go:89] found id: "d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd"
	I1120 21:46:20.672755  893814 cri.go:89] found id: "538778f2e99f0831684f744a21c231b476e72c223d7af53829698631c58b4b38"
	I1120 21:46:20.672779  893814 cri.go:89] found id: ""
	I1120 21:46:20.672864  893814 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:46:20.692788  893814 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:46:20Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:46:20.692935  893814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:46:20.704191  893814 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:46:20.704251  893814 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:46:20.704341  893814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:46:20.715485  893814 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:20.716011  893814 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.716179  893814 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:46:20.716543  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.717160  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:46:20.717985  893814 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:46:20.718059  893814 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:46:20.718131  893814 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:46:20.718157  893814 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:46:20.718177  893814 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:46:20.718212  893814 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:46:20.730102  893814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:46:20.744141  893814 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:46:20.744165  893814 kubeadm.go:602] duration metric: took 39.885836ms to restartPrimaryControlPlane
	I1120 21:46:20.744174  893814 kubeadm.go:403] duration metric: took 123.603025ms to StartCluster
	I1120 21:46:20.744191  893814 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.744256  893814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.744888  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.745066  893814 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:46:20.745084  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:46:20.745100  893814 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:46:20.745725  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.751118  893814 out.go:179] * Enabled addons: 
	I1120 21:46:20.754039  893814 addons.go:515] duration metric: took 8.930638ms for enable addons: enabled=[]
	I1120 21:46:20.754080  893814 start.go:247] waiting for cluster config update ...
	I1120 21:46:20.754090  893814 start.go:256] writing updated cluster config ...
	I1120 21:46:20.757337  893814 out.go:203] 
	I1120 21:46:20.760537  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.760717  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.764214  893814 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:46:20.767355  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:20.770446  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:20.773470  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:20.773563  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:20.773537  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:20.773902  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:20.773939  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:20.774117  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.801641  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:20.801660  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:20.801671  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:20.801698  893814 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:20.801748  893814 start.go:364] duration metric: took 35.446µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:46:20.801767  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:20.801774  893814 fix.go:54] fixHost starting: m02
	I1120 21:46:20.802025  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:20.830914  893814 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:46:20.830963  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:20.835462  893814 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:46:20.835556  893814 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:46:21.218686  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:21.252602  893814 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:46:21.252990  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:21.287738  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:21.288165  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:21.288242  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:21.321625  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:21.321986  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:21.322003  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:21.324132  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50986->127.0.0.1:33942: read: connection reset by peer
	I1120 21:46:24.541429  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.541464  893814 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:46:24.541536  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.591123  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.591436  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.591454  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:46:24.829670  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.830508  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.868680  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.868993  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.869016  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:25.086415  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:25.086446  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:25.086467  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:25.086477  893814 provision.go:84] configureAuth start
	I1120 21:46:25.086545  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:25.116440  893814 provision.go:143] copyHostCerts
	I1120 21:46:25.116492  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116528  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:25.116540  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116614  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:25.116704  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116727  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:25.116737  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116766  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:25.116814  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116842  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:25.116852  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116880  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:25.116934  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:46:25.299085  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:25.299152  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:25.299205  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.334304  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:25.454142  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:25.454207  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:25.519452  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:25.519523  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:46:25.579807  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:25.579872  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:25.625625  893814 provision.go:87] duration metric: took 539.133654ms to configureAuth
	I1120 21:46:25.625654  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:25.625881  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:25.626005  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.676739  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:25.677055  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:25.677078  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:27.313592  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:27.313611  893814 machine.go:97] duration metric: took 6.025425517s to provisionDockerMachine
	I1120 21:46:27.313622  893814 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:46:27.313633  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:27.313709  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:27.313760  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.348890  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.472301  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:27.476588  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:27.476614  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:27.476626  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:27.476683  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:27.476757  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:27.476765  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:27.476876  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:27.485018  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:27.504498  893814 start.go:296] duration metric: took 190.860481ms for postStartSetup
	I1120 21:46:27.504660  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:27.504741  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.528788  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.644723  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:27.649843  893814 fix.go:56] duration metric: took 6.84806345s for fixHost
	I1120 21:46:27.649868  893814 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.848112263s
	I1120 21:46:27.649945  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:27.674188  893814 out.go:179] * Found network options:
	I1120 21:46:27.677242  893814 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:46:27.680124  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:46:27.680168  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:46:27.680244  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:27.680247  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:27.680288  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.680307  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.700610  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.707137  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.925105  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:28.059572  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:28.059657  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:28.074369  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:28.074399  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:28.074432  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:28.074499  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:28.097384  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:28.115088  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:28.115159  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:28.145681  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:28.169842  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:28.395806  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:28.633186  893814 docker.go:234] disabling docker service ...
	I1120 21:46:28.633295  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:28.653639  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:28.673051  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:28.911134  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:29.139790  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:29.165309  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:29.189385  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:29.189499  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.203577  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:29.203723  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.219781  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.229964  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.247451  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:29.257774  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.270135  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.279629  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.289968  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:29.299527  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:29.308385  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:29.625535  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:47:59.900415  893814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.274799929s)
	I1120 21:47:59.900439  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:47:59.900493  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:47:59.904340  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:47:59.904408  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:47:59.908141  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:47:59.934786  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:47:59.934878  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:47:59.970641  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:00.031101  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:00.052822  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:00.070551  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:00.144325  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:00.158851  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:00.193319  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:00.193638  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:00.193952  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:00.257208  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:00.257542  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:48:00.257559  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:00.257575  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:00.257700  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:00.257744  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:00.257751  893814 certs.go:257] generating profile certs ...
	I1120 21:48:00.257839  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:48:00.257904  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.e3c52656
	I1120 21:48:00.257941  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:48:00.257951  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:00.257964  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:00.257975  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:00.257985  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:00.257997  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:48:00.258009  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:48:00.258021  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:48:00.258032  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:48:00.258087  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:00.258118  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:00.258141  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:00.258171  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:00.258206  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:00.258229  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:00.258276  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:00.258311  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.258325  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.258342  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:00.258416  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:48:00.286658  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:48:00.411419  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:48:00.416825  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:48:00.429106  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:48:00.434141  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:48:00.446859  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:48:00.451932  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:48:00.463743  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:48:00.468370  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:48:00.478967  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:48:00.483728  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:48:00.495516  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:48:00.499782  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:48:00.510022  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:00.533411  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:00.557609  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:00.579641  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:00.599346  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:48:00.622831  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:48:00.643496  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:48:00.662349  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:48:00.681048  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:00.700389  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:00.721204  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:00.741591  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:48:00.755291  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:48:00.769986  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:48:00.784853  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:48:00.798923  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:48:00.812361  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:48:00.826911  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:48:00.842313  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:00.849394  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.857032  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:00.864532  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868398  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868472  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.910592  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:00.918458  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.926263  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:00.934304  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938442  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938531  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.987101  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:00.995288  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.003879  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:01.012703  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016823  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016924  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.059233  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:01.068459  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:01.072670  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:48:01.115135  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:48:01.157870  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:48:01.200156  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:48:01.244244  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:48:01.286456  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:48:01.333479  893814 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:48:01.333592  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:01.333632  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:48:01.333685  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:48:01.347658  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:48:01.347774  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1120 21:48:01.347874  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:01.355891  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:01.355970  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:48:01.364043  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:01.379594  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:01.393213  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:48:01.408709  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:01.412906  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:01.423617  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.551671  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.569302  893814 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:48:01.569783  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:01.575430  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:01.578446  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.722511  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.736860  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:01.736934  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:01.737186  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960847  893814 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:48:04.960925  893814 node_ready.go:38] duration metric: took 3.223709398s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960953  893814 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:48:04.961033  893814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:48:05.021304  893814 api_server.go:72] duration metric: took 3.451906522s to wait for apiserver process to appear ...
	I1120 21:48:05.021328  893814 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:48:05.021347  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.086025  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:48:05.086102  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:48:05.521475  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.533319  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:05.533405  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.022053  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.033112  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.033164  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.521455  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.532108  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.532149  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.021472  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.033567  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.033607  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.522248  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.530734  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.530766  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.021549  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.030067  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.030107  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.521458  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.536690  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.536723  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.022442  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.030694  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.030720  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.522023  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.532358  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.532394  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.022104  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.033572  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.033669  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.521893  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.530183  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.530209  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.022029  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.030471  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.030511  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.522184  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.530808  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.530915  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:12.021498  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:12.034571  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:48:12.037300  893814 api_server.go:141] control plane version: v1.34.1
	I1120 21:48:12.037383  893814 api_server.go:131] duration metric: took 7.016046235s to wait for apiserver health ...
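For context on the loop above: api_server.go simply re-polls GET /healthz on the apiserver until it stops returning 500. A minimal Go sketch of that gate, using the endpoint shown in the log and skipping TLS verification the way a local test client can (a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}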
	I1120 21:48:12.037406  893814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:48:12.048906  893814 system_pods.go:59] 26 kube-system pods found
	I1120 21:48:12.049004  893814 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049030  893814 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049050  893814 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.049082  893814 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.049108  893814 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.049158  893814 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.049176  893814 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.049198  893814 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.049233  893814 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.049257  893814 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.049279  893814 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.049316  893814 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.049340  893814 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049361  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.049397  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049417  893814 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.049437  893814 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.049467  893814 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.049494  893814 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.049514  893814 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.049534  893814 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.049569  893814 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.049590  893814 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.049611  893814 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.049637  893814 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.049658  893814 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.049682  893814 system_pods.go:74] duration metric: took 12.253231ms to wait for pod list to return data ...
	I1120 21:48:12.049715  893814 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:48:12.054143  893814 default_sa.go:45] found service account: "default"
	I1120 21:48:12.054233  893814 default_sa.go:55] duration metric: took 4.491625ms for default service account to be created ...
	I1120 21:48:12.054260  893814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:48:12.060879  893814 system_pods.go:86] 26 kube-system pods found
	I1120 21:48:12.060981  893814 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061047  893814 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061081  893814 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.061118  893814 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.061152  893814 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.061181  893814 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.061223  893814 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.061271  893814 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.061294  893814 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.061323  893814 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.061400  893814 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.061442  893814 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.061465  893814 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061496  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.061529  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061551  893814 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.061574  893814 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.061605  893814 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.061634  893814 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.061656  893814 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.061691  893814 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.061711  893814 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.061741  893814 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.061774  893814 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.061808  893814 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.061865  893814 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.061888  893814 system_pods.go:126] duration metric: took 7.607421ms to wait for k8s-apps to be running ...
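An aside on the "waiting for k8s-apps to be running" check above: it amounts to listing kube-system pods and inspecting phase and readiness. A rough client-go sketch of that enumeration; reading KUBECONFIG from the environment is an assumption here, not something taken from the log:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig source
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}

The Running-but-ContainersNotReady entries above are exactly what such a listing reports while the restarted control-plane pods are still coming back up.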
	I1120 21:48:12.061910  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:12.062033  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:12.076739  893814 system_svc.go:56] duration metric: took 14.81844ms WaitForService to wait for kubelet
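The kubelet gate above is just an exit-code check. A minimal Go equivalent of the same probe (systemctl is-active --quiet exits 0 only when the unit is active):

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive mirrors the check in the log: exit status 0 means the unit is active.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}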
	I1120 21:48:12.076837  893814 kubeadm.go:587] duration metric: took 10.507445578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:12.076873  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:12.086832  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086926  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.086951  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086971  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087052  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.087072  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087105  893814 node_conditions.go:105] duration metric: took 10.20235ms to run NodePressure ...
	I1120 21:48:12.087136  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:12.087208  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:12.090921  893814 out.go:203] 
	I1120 21:48:12.094218  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:12.094393  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.097669  893814 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:48:12.101322  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:48:12.106565  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:48:12.109717  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:48:12.109827  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:48:12.109799  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:48:12.110177  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:48:12.110212  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:48:12.110403  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.132566  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:48:12.132590  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:48:12.132610  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:48:12.132636  893814 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:48:12.132693  893814 start.go:364] duration metric: took 36.636µs to acquireMachinesLock for "ha-409851-m04"
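acquireMachinesLock above serializes work per machine name, so concurrent operations on the same node queue up while other nodes proceed. A toy per-name mutex registry illustrating only the idea (the real lock also carries the Delay and Timeout settings shown in the log line):

package main

import (
	"fmt"
	"sync"
	"time"
)

// lockRegistry hands out one mutex per machine name.
type lockRegistry struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func (r *lockRegistry) acquire(name string) *sync.Mutex {
	r.mu.Lock()
	if r.locks == nil {
		r.locks = map[string]*sync.Mutex{}
	}
	l, ok := r.locks[name]
	if !ok {
		l = &sync.Mutex{}
		r.locks[name] = l
	}
	r.mu.Unlock()

	start := time.Now()
	l.Lock()
	fmt.Printf("acquired lock for %q in %s\n", name, time.Since(start))
	return l
}

func main() {
	var reg lockRegistry
	l := reg.acquire("ha-409851-m04")
	defer l.Unlock()
	// ... restart and provision the machine while holding the lock ...
}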
	I1120 21:48:12.132719  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:48:12.132728  893814 fix.go:54] fixHost starting: m04
	I1120 21:48:12.132989  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.154532  893814 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:48:12.154570  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:48:12.157790  893814 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:48:12.157940  893814 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:48:12.427421  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.449849  893814 kic.go:430] container "ha-409851-m04" state is running.
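The state probe above shells out to docker. A small Go wrapper around the exact command from the log, handy when reproducing this check by hand:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns docker's view of the container, e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("ha-409851-m04")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("state:", state)
}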
	I1120 21:48:12.450339  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:12.476563  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.476804  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:48:12.476866  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:12.503516  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:12.503831  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:12.503851  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:48:12.506827  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:48:15.671577  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.671648  893814 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:48:15.671727  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.694098  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.694405  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.694422  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:48:15.858000  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.858085  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.876926  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.877279  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.877303  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:48:16.029401  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
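All of the provisioning commands above run over SSH to 127.0.0.1:33947 with the key and user shown in the sshutil lines. A bare-bones sketch of issuing one such command with golang.org/x/crypto/ssh; InsecureIgnoreHostKey is a shortcut acceptable only against a throwaway local node:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // do not do this against real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33947", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("err=%v output=%s", err, out)
}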
	I1120 21:48:16.029428  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:48:16.029445  893814 ubuntu.go:190] setting up certificates
	I1120 21:48:16.029456  893814 provision.go:84] configureAuth start
	I1120 21:48:16.029533  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:16.048090  893814 provision.go:143] copyHostCerts
	I1120 21:48:16.048141  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048175  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:48:16.048187  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048261  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:48:16.048383  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048401  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:48:16.048406  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048432  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:48:16.048499  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048515  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:48:16.048520  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048545  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:48:16.048600  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:48:16.265083  893814 provision.go:177] copyRemoteCerts
	I1120 21:48:16.265160  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:48:16.265209  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.290442  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.396414  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:48:16.396484  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:48:16.418369  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:48:16.418439  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:48:16.437910  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:48:16.437992  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:48:16.456712  893814 provision.go:87] duration metric: took 427.242108ms to configureAuth
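configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.5, ha-409851-m04, localhost and minikube. A self-contained sketch producing a certificate with those SANs using the Go standard library; it is self-signed for brevity, whereas minikube signs with the profile's CA key, and the key size is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size assumed
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-409851-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-409851-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	// Self-signed: the template doubles as parent. minikube would pass its CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}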
	I1120 21:48:16.456739  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:48:16.457027  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:16.457179  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.476563  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:16.477370  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:16.477578  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:48:16.833311  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:48:16.833334  893814 machine.go:97] duration metric: took 4.356521136s to provisionDockerMachine
	I1120 21:48:16.833346  893814 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:48:16.833356  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:48:16.833422  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:48:16.833480  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.855465  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.967534  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:48:16.970900  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:48:16.970931  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:48:16.970942  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:48:16.971037  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:48:16.971121  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:48:16.971132  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:48:16.971248  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:48:16.980647  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:17.001479  893814 start.go:296] duration metric: took 168.114968ms for postStartSetup
	I1120 21:48:17.001571  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:48:17.001627  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.030384  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.140073  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:48:17.144863  893814 fix.go:56] duration metric: took 5.012127885s for fixHost
	I1120 21:48:17.144890  893814 start.go:83] releasing machines lock for "ha-409851-m04", held for 5.012183123s
	I1120 21:48:17.144964  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:17.172547  893814 out.go:179] * Found network options:
	I1120 21:48:17.175556  893814 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:48:17.178404  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178431  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178457  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178669  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:48:17.178737  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:48:17.178785  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.178630  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:48:17.178897  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.197245  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.203292  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.340122  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:48:17.405989  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:48:17.406071  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:48:17.414439  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:48:17.414465  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:48:17.414498  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:48:17.414553  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:48:17.430500  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:48:17.443843  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:48:17.443906  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:48:17.460231  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:48:17.475600  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:48:17.602698  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:48:17.729597  893814 docker.go:234] disabling docker service ...
	I1120 21:48:17.729663  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:48:17.746588  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:48:17.760617  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:48:17.897973  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:48:18.030520  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:48:18.046315  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:48:18.066053  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:48:18.066129  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.077050  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:48:18.077175  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.090079  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.100829  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.110671  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:48:18.121922  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.135640  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.145103  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.155094  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:48:18.164129  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:48:18.171842  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:18.297944  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
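The runner commands above reduce to three node-local edits: point crictl at the CRI-O socket, pin the pause image, and switch CRI-O to the cgroupfs cgroup manager, then restart the runtime. A minimal shell sketch of that sequence, assuming the same paths and values shown in the log (run as root on the node; not part of the captured output):

	  # tell crictl where the CRI-O socket lives
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
	  # pin the pause image and force the cgroupfs cgroup manager
	  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # reload units and restart the runtime so the new config takes effect
	  systemctl daemon-reload
	  systemctl restart crio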
	I1120 21:48:18.470275  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:48:18.470358  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:48:18.479108  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:48:18.479175  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:48:18.483098  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:48:18.507764  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:48:18.507924  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.539112  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.574786  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:18.577738  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:18.580677  893814 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:48:18.583863  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:18.602824  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:18.606736  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:18.616366  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:18.616605  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:18.616854  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:18.635714  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:18.635989  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:48:18.636005  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:18.636021  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:18.636154  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:18.636201  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:18.636216  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:18.636245  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:18.636262  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:18.636274  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:18.636332  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:18.636367  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:18.636380  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:18.636406  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:18.636432  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:18.636458  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:18.636503  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:18.636535  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.636553  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.636564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.636585  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:18.657556  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:18.675080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:18.694571  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:18.716226  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:18.739895  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:18.768046  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:18.787993  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:18.794810  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.802541  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:18.810498  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814300  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814368  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.856630  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:18.864919  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.872737  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:18.880590  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884848  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884916  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.931413  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:18.939099  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.946583  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:18.954298  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960087  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960197  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:19.002435  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
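The openssl/ln steps above install each CA under /usr/share/ca-certificates and expose it to OpenSSL through a subject-hash symlink in /etc/ssl/certs (51391683.0, b5213941.0 and 3ec20f2e.0 in this run). A generic sketch of the same idea, with an illustrative certificate name rather than one from this cluster:

	  # copy the CA into place, compute its subject hash, and create the hash symlink
	  cp example-ca.pem /usr/share/ca-certificates/example-ca.pem   # example-ca.pem is illustrative
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	  ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"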
	I1120 21:48:19.012167  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:19.016432  893814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:48:19.016483  893814 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:48:19.016573  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:19.016654  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:19.026160  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:19.026286  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:48:19.036127  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:19.049708  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:19.064947  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:19.068918  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:19.079069  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.199728  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.213792  893814 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:48:19.214167  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:19.219019  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:19.221920  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.355490  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.371278  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:19.371349  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:19.371586  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374629  893814 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:48:19.374657  893814 node_ready.go:38] duration metric: took 3.053659ms for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374671  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:19.374745  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:19.389451  893814 system_svc.go:56] duration metric: took 14.77112ms WaitForService to wait for kubelet
	I1120 21:48:19.389479  893814 kubeadm.go:587] duration metric: took 175.627603ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:19.389497  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:19.393426  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393518  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393535  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393542  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393547  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393552  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393557  893814 node_conditions.go:105] duration metric: took 4.054434ms to run NodePressure ...
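The NodePressure check above reads each node's capacity and pressure conditions straight from the API. A rough equivalent with kubectl against the same cluster (assuming a working kubeconfig):

	  # per-node CPU and ephemeral-storage capacity
	  kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
	  # MemoryPressure condition status per node
	  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="MemoryPressure")].status}{"\n"}{end}'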
	I1120 21:48:19.393575  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:19.393603  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:19.393953  893814 ssh_runner.go:195] Run: rm -f paused
	I1120 21:48:19.397987  893814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:48:19.398502  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:48:19.416487  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:48:21.424537  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:23.929996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:26.423923  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:28.424118  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:30.923501  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:33.423121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:35.423365  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:37.424719  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:39.923727  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:41.965360  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:44.435238  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:46.923403  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:48.923993  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:51.426397  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:53.924562  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:56.423976  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:58.431436  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:00.922387  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:02.923880  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:04.924121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:07.423527  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:09.424675  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:11.922381  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:13.922686  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:15.923609  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:17.924006  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:20.423097  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:22.423996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	I1120 21:49:23.424030  893814 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:49:23.424063  893814 pod_ready.go:86] duration metric: took 1m4.007542805s for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.424073  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.430119  893814 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:49:23.430146  893814 pod_ready.go:86] duration metric: took 6.066348ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.434497  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442021  893814 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:49:23.442059  893814 pod_ready.go:86] duration metric: took 7.532597ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442070  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.453471  893814 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:49:23.453510  893814 pod_ready.go:86] duration metric: took 11.432528ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.460522  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.617970  893814 request.go:683] "Waited before sending request" delay="157.293328ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851"
	I1120 21:49:23.817544  893814 request.go:683] "Waited before sending request" delay="194.243021ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:23.820786  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:49:23.820814  893814 pod_ready.go:86] duration metric: took 360.266065ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.820823  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.018232  893814 request.go:683] "Waited before sending request" delay="197.334029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851-m02"
	I1120 21:49:24.217808  893814 request.go:683] "Waited before sending request" delay="195.31208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:24.220981  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:49:24.221009  893814 pod_ready.go:86] duration metric: took 400.178739ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.418386  893814 request.go:683] "Waited before sending request" delay="197.22929ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:49:24.423065  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.617542  893814 request.go:683] "Waited before sending request" delay="194.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:49:24.818451  893814 request.go:683] "Waited before sending request" delay="195.369435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:24.821748  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:49:24.821777  893814 pod_ready.go:86] duration metric: took 398.632324ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.821787  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.018152  893814 request.go:683] "Waited before sending request" delay="196.257511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:49:25.217440  893814 request.go:683] "Waited before sending request" delay="193.274434ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:25.221099  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:49:25.221184  893814 pod_ready.go:86] duration metric: took 399.388707ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.417592  893814 request.go:683] "Waited before sending request" delay="196.294697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 21:49:25.421901  893814 pod_ready.go:83] waiting for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.618261  893814 request.go:683] "Waited before sending request" delay="196.198417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qqxh"
	I1120 21:49:25.818227  893814 request.go:683] "Waited before sending request" delay="195.266861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:25.822845  893814 pod_ready.go:94] pod "kube-proxy-4qqxh" is "Ready"
	I1120 21:49:25.822876  893814 pod_ready.go:86] duration metric: took 400.891774ms for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.822887  893814 pod_ready.go:83] waiting for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.018147  893814 request.go:683] "Waited before sending request" delay="195.181839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz7vt"
	I1120 21:49:26.218218  893814 request.go:683] "Waited before sending request" delay="194.325204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:26.221718  893814 pod_ready.go:94] pod "kube-proxy-pz7vt" is "Ready"
	I1120 21:49:26.221756  893814 pod_ready.go:86] duration metric: took 398.861103ms for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.221767  893814 pod_ready.go:83] waiting for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.418209  893814 request.go:683] "Waited before sending request" delay="196.333755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhl6"
	I1120 21:49:26.618151  893814 request.go:683] "Waited before sending request" delay="196.349344ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m04"
	I1120 21:49:26.623181  893814 pod_ready.go:94] pod "kube-proxy-xnhl6" is "Ready"
	I1120 21:49:26.623210  893814 pod_ready.go:86] duration metric: took 401.436889ms for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.817459  893814 request.go:683] "Waited before sending request" delay="194.131676ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1120 21:49:26.821013  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.018492  893814 request.go:683] "Waited before sending request" delay="197.322386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851"
	I1120 21:49:27.217513  893814 request.go:683] "Waited before sending request" delay="190.181719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:27.226443  893814 pod_ready.go:94] pod "kube-scheduler-ha-409851" is "Ready"
	I1120 21:49:27.226520  893814 pod_ready.go:86] duration metric: took 405.47524ms for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.226546  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.417983  893814 request.go:683] "Waited before sending request" delay="191.325659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:27.618140  893814 request.go:683] "Waited before sending request" delay="196.249535ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:27.817620  893814 request.go:683] "Waited before sending request" delay="90.393989ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:28.018196  893814 request.go:683] "Waited before sending request" delay="197.189707ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.417767  893814 request.go:683] "Waited before sending request" delay="186.33455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.817959  893814 request.go:683] "Waited before sending request" delay="87.275796ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:49:29.233343  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:31.233779  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:33.234413  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:35.733284  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:38.233049  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:40.233361  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:42.235442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:44.734815  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:47.232729  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:49.233113  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:51.234068  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:53.732962  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:56.233319  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:58.734472  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:01.234009  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:03.234832  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:05.733469  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:08.234179  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:10.735546  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:12.735872  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:14.736374  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:16.740445  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:19.233806  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:21.733741  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:23.735456  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:26.232453  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:28.233317  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:30.735024  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:32.735868  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:35.234232  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:37.734207  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:40.234052  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:42.240134  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:44.733059  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:46.733334  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:48.738389  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:51.233067  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:53.234660  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:55.733852  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:57.734484  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:00.249903  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:02.732606  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:04.736105  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:07.233350  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:09.733211  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:11.733392  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:14.234536  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:16.732259  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:18.735892  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:20.735996  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:23.234680  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:25.733375  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:27.733961  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:29.735523  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:32.236382  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:34.733336  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:36.733744  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:38.734442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:40.734588  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:42.734796  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:44.735137  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:46.736111  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:49.233632  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:51.733070  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:53.734822  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:56.233800  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:58.234379  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:00.264529  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:02.742360  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:05.233819  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:07.733077  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:09.734867  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:12.233625  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:14.733387  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:16.734342  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:18.734797  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	I1120 21:52:19.398473  893814 pod_ready.go:86] duration metric: took 2m52.171896252s for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:52:19.398508  893814 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:52:19.398524  893814 pod_ready.go:40] duration metric: took 4m0.000499103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:52:19.401528  893814 out.go:203] 
	W1120 21:52:19.404511  893814 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:52:19.407414  893814 out.go:203] 
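The exit above is the extra readiness wait timing out: kube-scheduler-ha-409851-m02 never reported Ready within the 4m budget. A rough manual equivalent of that wait, expressed with kubectl against the same cluster (pod name taken from the log):

	  # block until the scheduler pod is Ready, or fail after the same 4m budget
	  kubectl -n kube-system wait --for=condition=Ready pod/kube-scheduler-ha-409851-m02 --timeout=4m
	  # if the wait fails, inspect the pod's events and container state
	  kubectl -n kube-system describe pod kube-scheduler-ha-409851-m02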
	
	
	==> CRI-O <==
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811470727Z" level=info msg="Running pod sandbox: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811536598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.815250925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.818484951Z" level=info msg="Ran pod sandbox b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb with infra container: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.820409438Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de13f0e7-3c4a-42d5-9c8d-3a3bc426d7fd name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.826704318Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2666544-b5e7-4f59-a2f3-144082db7373 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.828605429Z" level=info msg="Creating container: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.829288957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.834469699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.835169227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.85382609Z" level=info msg="Created container bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.854825659Z" level=info msg="Starting container: bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454" id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.859192598Z" level=info msg="Started container" PID=1405 containerID=bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454 description=kube-system/kindnet-7hmbf/kindnet-cni id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.206856782Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210460298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.21049604Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210517833Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.213977617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214129201Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214171162Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217329445Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217362923Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217385791Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220578314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220610922Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bad91fe692656       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 minutes ago       Running             kindnet-cni               2                   b2d79927049c1       kindnet-7hmbf                       kube-system
	45150399abc60       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   3 minutes ago       Running             busybox                   2                   86a0aabe892ba       busybox-7b57f96db7-mgvhj            default
	282f28167fcd8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       3                   cf9b9178a22be       storage-provisioner                 kube-system
	283abd913ff4d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 minutes ago       Running             kube-proxy                2                   51827a0562eaa       kube-proxy-4qqxh                    kube-system
	3064e4d2cac3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   3 minutes ago       Running             coredns                   2                   f1efa47298912       coredns-66bc5c9577-pjk6c            kube-system
	474e5b9d1f070       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   3 minutes ago       Running             coredns                   2                   fb899ea594eab       coredns-66bc5c9577-vfsp6            kube-system
	5ccb03706c0f4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   7                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	53d8cbac386fc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   6                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	21eb6c12eb9d6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   4 minutes ago       Running             kube-apiserver            4                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
	e758e4601a79a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  2                   276d004d64a0f       kube-vip-ha-409851                  kube-system
	bf7fd293f188a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   251d917d7ecb8       kube-scheduler-ha-409851            kube-system
	29879cb03dd0a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      2                   44edbb77d8632       etcd-ha-409851                      kube-system
	d2a9e01261d92       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            3                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
	
	
	==> coredns [3064e4d2cac3e067a0a0ba1353e3b89a5da11e7e5a320f683346febeadfbb73a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40971 - 38824 "HINFO IN 3995400066811168115.5738602718581230250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004050865s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [474e5b9d1f07007a252c22fb0e9172e8fd3235037aecc813a1d66128aa8e0d26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46282 - 18255 "HINFO IN 2304188649282025477.3571330681415947141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021110391s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-409851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:49 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-409851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                1f114e92-c1bf-4c10-9121-0a6c185877b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mgvhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-pjk6c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 coredns-66bc5c9577-vfsp6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-ha-409851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-7hmbf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-409851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-409851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4qqxh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-409851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-409851                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 3m41s                kube-proxy       
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-409851 status is now: NodeReady
	  Normal   RegisteredNode           17m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s (x8 over 6m5s)  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m13s                node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           3m40s                node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	
	
	Name:               ha-409851-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:21 +0000   Thu, 20 Nov 2025 21:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-409851-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3904cc8f-d8d1-4880-8dca-3fb5e1048dff
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqh2f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-409851-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-56lr8                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-409851-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-409851-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pz7vt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-409851-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-409851-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 3m41s                kube-proxy       
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)    kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)    kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 6m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m2s (x8 over 6m2s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m2s (x8 over 6m2s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m2s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m13s                node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           3m40s                node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	
	
	Name:               ha-409851-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:51:50 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-409851-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                2c1b4976-2a70-4f78-8646-ed9804d613b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-snllw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kindnet-2d5r9               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-proxy-xnhl6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 3m52s                 kube-proxy       
	  Normal   Starting                 16m                   kube-proxy       
	  Normal   Starting                 10m                   kube-proxy       
	  Warning  CgroupV1                 16m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)     kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)     kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x3 over 16m)     kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           16m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           16m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-409851-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeNotReady             12m                   node-controller  Node ha-409851-m04 status is now: NodeNotReady
	  Normal   Starting                 11m                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)     kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)     kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)     kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m13s                 node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   Starting                 4m11s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m11s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m8s (x8 over 4m11s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m8s (x8 over 4m11s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m8s (x8 over 4m11s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m40s                 node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	
	
	==> dmesg <==
	[Nov20 19:53] overlayfs: idmapped layers are currently not supported
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	[Nov20 21:32] overlayfs: idmapped layers are currently not supported
	[Nov20 21:33] overlayfs: idmapped layers are currently not supported
	[Nov20 21:34] overlayfs: idmapped layers are currently not supported
	[Nov20 21:36] overlayfs: idmapped layers are currently not supported
	[Nov20 21:37] overlayfs: idmapped layers are currently not supported
	[Nov20 21:38] overlayfs: idmapped layers are currently not supported
	[  +3.034217] overlayfs: idmapped layers are currently not supported
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	[Nov20 21:46] overlayfs: idmapped layers are currently not supported
	[  +2.922279] overlayfs: idmapped layers are currently not supported
	[Nov20 21:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de] <==
	{"level":"info","ts":"2025-11-20T21:48:04.987262Z","caller":"traceutil/trace.go:172","msg":"trace[1675718777] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3453; }","duration":"117.030077ms","start":"2025-11-20T21:48:04.870220Z","end":"2025-11-20T21:48:04.987250Z","steps":["trace[1675718777] 'agreement among raft nodes before linearized reading'  (duration: 108.221542ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:04.997350Z","caller":"traceutil/trace.go:172","msg":"trace[1770117129] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:3453; }","duration":"121.253555ms","start":"2025-11-20T21:48:04.876071Z","end":"2025-11-20T21:48:04.997324Z","steps":["trace[1770117129] 'agreement among raft nodes before linearized reading'  (duration: 102.33561ms)","trace[1770117129] 'range keys from in-memory index tree'  (duration: 18.887036ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:48:05.098010Z","caller":"traceutil/trace.go:172","msg":"trace[2037113995] range","detail":"{range_begin:/registry/ingressclasses; range_end:; response_count:0; response_revision:3453; }","duration":"102.975273ms","start":"2025-11-20T21:48:04.995024Z","end":"2025-11-20T21:48:05.098000Z","steps":["trace[2037113995] 'agreement among raft nodes before linearized reading'  (duration: 102.942698ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100356Z","caller":"traceutil/trace.go:172","msg":"trace[162038184] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3453; }","duration":"111.304394ms","start":"2025-11-20T21:48:04.989041Z","end":"2025-11-20T21:48:05.100345Z","steps":["trace[162038184] 'agreement among raft nodes before linearized reading'  (duration: 111.259043ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100414Z","caller":"traceutil/trace.go:172","msg":"trace[1479816564] range","detail":"{range_begin:/registry/deviceclasses/; range_end:/registry/deviceclasses0; response_count:0; response_revision:3453; }","duration":"122.163392ms","start":"2025-11-20T21:48:04.978245Z","end":"2025-11-20T21:48:05.100409Z","steps":["trace[1479816564] 'agreement among raft nodes before linearized reading'  (duration: 122.142174ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100443Z","caller":"traceutil/trace.go:172","msg":"trace[1071692997] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:3453; }","duration":"122.210957ms","start":"2025-11-20T21:48:04.978228Z","end":"2025-11-20T21:48:05.100439Z","steps":["trace[1071692997] 'agreement among raft nodes before linearized reading'  (duration: 122.195835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100470Z","caller":"traceutil/trace.go:172","msg":"trace[321870719] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:3453; }","duration":"122.649806ms","start":"2025-11-20T21:48:04.977816Z","end":"2025-11-20T21:48:05.100466Z","steps":["trace[321870719] 'agreement among raft nodes before linearized reading'  (duration: 122.636702ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100504Z","caller":"traceutil/trace.go:172","msg":"trace[391658353] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:3453; }","duration":"122.764745ms","start":"2025-11-20T21:48:04.977735Z","end":"2025-11-20T21:48:05.100500Z","steps":["trace[391658353] 'agreement among raft nodes before linearized reading'  (duration: 122.746931ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100571Z","caller":"traceutil/trace.go:172","msg":"trace[1747834679] range","detail":"{range_begin:compact_rev_key; range_end:; response_count:1; response_revision:3453; }","duration":"122.847642ms","start":"2025-11-20T21:48:04.977719Z","end":"2025-11-20T21:48:05.100567Z","steps":["trace[1747834679] 'agreement among raft nodes before linearized reading'  (duration: 122.792413ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100602Z","caller":"traceutil/trace.go:172","msg":"trace[994787852] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:3453; }","duration":"123.045857ms","start":"2025-11-20T21:48:04.977552Z","end":"2025-11-20T21:48:05.100598Z","steps":["trace[994787852] 'agreement among raft nodes before linearized reading'  (duration: 123.029184ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100652Z","caller":"traceutil/trace.go:172","msg":"trace[1075319704] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:3453; }","duration":"123.113213ms","start":"2025-11-20T21:48:04.977533Z","end":"2025-11-20T21:48:05.100646Z","steps":["trace[1075319704] 'agreement among raft nodes before linearized reading'  (duration: 123.079128ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100678Z","caller":"traceutil/trace.go:172","msg":"trace[1734896502] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:3453; }","duration":"123.161287ms","start":"2025-11-20T21:48:04.977513Z","end":"2025-11-20T21:48:05.100674Z","steps":["trace[1734896502] 'agreement among raft nodes before linearized reading'  (duration: 123.149406ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100726Z","caller":"traceutil/trace.go:172","msg":"trace[65494134] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:3453; }","duration":"123.22569ms","start":"2025-11-20T21:48:04.977496Z","end":"2025-11-20T21:48:05.100722Z","steps":["trace[65494134] 'agreement among raft nodes before linearized reading'  (duration: 123.189883ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100749Z","caller":"traceutil/trace.go:172","msg":"trace[946885568] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:3453; }","duration":"123.29692ms","start":"2025-11-20T21:48:04.977448Z","end":"2025-11-20T21:48:05.100745Z","steps":["trace[946885568] 'agreement among raft nodes before linearized reading'  (duration: 123.287205ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100772Z","caller":"traceutil/trace.go:172","msg":"trace[1602857348] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:3453; }","duration":"123.339439ms","start":"2025-11-20T21:48:04.977429Z","end":"2025-11-20T21:48:05.100768Z","steps":["trace[1602857348] 'agreement among raft nodes before linearized reading'  (duration: 123.328403ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100835Z","caller":"traceutil/trace.go:172","msg":"trace[1657109007] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:3453; }","duration":"123.41807ms","start":"2025-11-20T21:48:04.977413Z","end":"2025-11-20T21:48:05.100831Z","steps":["trace[1657109007] 'agreement among raft nodes before linearized reading'  (duration: 123.366041ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100863Z","caller":"traceutil/trace.go:172","msg":"trace[256739583] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:3453; }","duration":"123.461788ms","start":"2025-11-20T21:48:04.977397Z","end":"2025-11-20T21:48:05.100859Z","steps":["trace[256739583] 'agreement among raft nodes before linearized reading'  (duration: 123.448233ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100889Z","caller":"traceutil/trace.go:172","msg":"trace[157362704] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:3453; }","duration":"123.504331ms","start":"2025-11-20T21:48:04.977378Z","end":"2025-11-20T21:48:05.100882Z","steps":["trace[157362704] 'agreement among raft nodes before linearized reading'  (duration: 123.492729ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100936Z","caller":"traceutil/trace.go:172","msg":"trace[439810993] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:2; response_revision:3453; }","duration":"123.968846ms","start":"2025-11-20T21:48:04.976963Z","end":"2025-11-20T21:48:05.100932Z","steps":["trace[439810993] 'agreement among raft nodes before linearized reading'  (duration: 123.933875ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100963Z","caller":"traceutil/trace.go:172","msg":"trace[1409698449] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:3453; }","duration":"124.019924ms","start":"2025-11-20T21:48:04.976938Z","end":"2025-11-20T21:48:05.100958Z","steps":["trace[1409698449] 'agreement among raft nodes before linearized reading'  (duration: 124.006566ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.100988Z","caller":"traceutil/trace.go:172","msg":"trace[1232400826] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:3453; }","duration":"124.21593ms","start":"2025-11-20T21:48:04.976768Z","end":"2025-11-20T21:48:05.100984Z","steps":["trace[1232400826] 'agreement among raft nodes before linearized reading'  (duration: 124.203794ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.101052Z","caller":"traceutil/trace.go:172","msg":"trace[1428873499] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:7; response_revision:3453; }","duration":"124.603382ms","start":"2025-11-20T21:48:04.976444Z","end":"2025-11-20T21:48:05.101048Z","steps":["trace[1428873499] 'agreement among raft nodes before linearized reading'  (duration: 124.551451ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.101076Z","caller":"traceutil/trace.go:172","msg":"trace[1456894827] range","detail":"{range_begin:/registry/leases; range_end:; response_count:0; response_revision:3453; }","duration":"125.518633ms","start":"2025-11-20T21:48:04.975553Z","end":"2025-11-20T21:48:05.101072Z","steps":["trace[1456894827] 'agreement among raft nodes before linearized reading'  (duration: 125.507408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.163635Z","caller":"traceutil/trace.go:172","msg":"trace[1396962058] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:3453; }","duration":"124.48073ms","start":"2025-11-20T21:48:05.039143Z","end":"2025-11-20T21:48:05.163623Z","steps":["trace[1396962058] 'agreement among raft nodes before linearized reading'  (duration: 124.42829ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:48:05.163909Z","caller":"traceutil/trace.go:172","msg":"trace[247699851] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:3453; }","duration":"128.382177ms","start":"2025-11-20T21:48:05.035520Z","end":"2025-11-20T21:48:05.163902Z","steps":["trace[247699851] 'agreement among raft nodes before linearized reading'  (duration: 128.353606ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:52:24 up  4:34,  0 user,  load average: 0.66, 0.96, 1.27
	Linux ha-409851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454] <==
	I1120 21:51:36.212698       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:51:46.206671       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:51:46.206714       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:51:46.206892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:51:46.206910       1 main.go:301] handling current node
	I1120 21:51:46.206925       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:51:46.206929       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:51:56.208319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:51:56.208360       1 main.go:301] handling current node
	I1120 21:51:56.208376       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:51:56.208382       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:51:56.208532       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:51:56.208547       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:06.212796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:52:06.212831       1 main.go:301] handling current node
	I1120 21:52:06.212847       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:52:06.212853       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:52:06.213011       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:52:06.213024       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:16.213240       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:52:16.213273       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:52:16.213426       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:52:16.213439       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:52:16.213508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:52:16.213519       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21eb6c12eb9d6c645ff79035e852942fc36d120d38e6634372d84d1fff4b1c3a] <==
	I1120 21:48:05.164517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:48:05.251597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:48:05.267215       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:48:05.273069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:48:05.273181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:48:05.301644       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:48:05.303022       1 policy_source.go:240] refreshing policies
	I1120 21:48:05.343504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:48:05.343769       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:48:05.344234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:48:05.350900       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:48:05.361480       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:48:05.362670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:48:05.370720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:48:05.362690       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:48:11.243570       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:48:11.243643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:48:11.543897       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	W1120 21:48:11.986847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1120 21:48:11.988628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:48:11.996638       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:48:31.545364       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:48:44.311228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:48:46.301552       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:49:23.280882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd] <==
	{"level":"warn","ts":"2025-11-20T21:47:25.675429Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675510Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cd61e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675620Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675648Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675671Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000797860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675698Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400224d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675596Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40007970e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675739Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019532c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675766Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b6d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675801Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675829Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400276c780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675854Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675804Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d83c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675908Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.827032Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1120 21:47:25.827154       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1120 21:47:25.827227       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828931       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828993       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.830257       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.94329ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-11-20T21:47:26.843128Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	F1120 21:47:27.272727       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43] <==
	I1120 21:47:44.271187       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:47:45.887863       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 21:47:45.887899       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:47:45.889312       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 21:47:45.889482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 21:47:45.889741       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 21:47:45.889803       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 21:47:55.905939       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [5ccb03706c0f435e1a09ff9e7ebbe19aee8f89c6e7467182aa27e3874e6c323d] <==
	I1120 21:48:44.191236       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:48:44.191247       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:48:44.192321       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:48:44.194569       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:48:44.194593       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:48:44.194667       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:48:44.196895       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:48:44.197845       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:48:44.200483       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:48:44.201695       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:48:44.201862       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:48:44.201975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m04"
	I1120 21:48:44.202045       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851"
	I1120 21:48:44.202137       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m02"
	I1120 21:48:44.202200       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:48:44.213792       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:48:44.217890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:48:44.217972       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:48:44.218002       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:48:44.234704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:49:23.353198       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.353878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	E1120 21:49:23.376944       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1120 21:49:23.392884       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.393588       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [283abd913ff4d5c1081b76097b71e66eb996220513fadc607f8f68cd50071785] <==
	I1120 21:48:42.954042       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:48:43.040713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:48:43.141728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:48:43.141763       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:48:43.141860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:48:43.160133       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:48:43.160188       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:48:43.163678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:48:43.163975       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:48:43.164011       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:48:43.168077       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:48:43.168182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:48:43.168489       1 config.go:200] "Starting service config controller"
	I1120 21:48:43.168532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:48:43.169345       1 config.go:309] "Starting node config controller"
	I1120 21:48:43.169359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:48:43.169367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:48:43.172283       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:48:43.172357       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:48:43.268742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:48:43.268898       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:48:43.272772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede] <==
	E1120 21:47:42.144862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:47:42.442641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:47:42.927579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:47:43.326155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:47:43.512114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:47:44.079747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:47:44.466132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:47:51.236636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:47:53.441273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:47:53.443366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:47:55.204767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:47:56.179669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:47:56.809409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:47:58.566654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:47:58.739996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:47:59.402329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:47:59.593992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:48:00.869852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:48:01.061027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:48:01.453651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:48:03.292850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:48:03.733908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:48:03.942583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:48:04.337599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:48:05.178246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.102858     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7hmbf\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c" pod="kube-system/kindnet-7hmbf"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116790     805 kubelet_node_status.go:124] "Node was previously registered" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116930     805 kubelet_node_status.go:78] "Successfully registered node" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116963     805 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.117831     805 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.123111     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"storage-provisioner\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="349c85dc-6341-43ab-b388-8734d72e3040" pod="kube-system/storage-provisioner"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.167806     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-vip-ha-409851\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="6f4588d400318593d47cec16914af85c" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.254640     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4qqxh\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa" pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:48:14 ha-409851 kubelet[805]: I1120 21:48:14.806712     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:14 ha-409851 kubelet[805]: E1120 21:48:14.806952     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:19 ha-409851 kubelet[805]: E1120 21:48:19.826466     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log: no such file or directory
	Nov 20 21:48:26 ha-409851 kubelet[805]: I1120 21:48:26.807409     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:26 ha-409851 kubelet[805]: E1120 21:48:26.807617     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.761938     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jvsfx], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-vfsp6" podUID="09c1e0dd-0208-4f69-aac9-670197f4c848"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cg4c6], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-pjk6c" podUID="ad25e130-cf9b-4f5e-b082-23c452bd1c5c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rjfpv], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kube-proxy-4qqxh" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767309     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ndpsr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kindnet-7hmbf" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768337     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jlbcp], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/storage-provisioner" podUID="349c85dc-6341-43ab-b388-8734d72e3040"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768345     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-t5g2b], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="default/busybox-7b57f96db7-mgvhj" podUID="79106a87-339a-4b68-ad4e-12ef6b0b03ca"
	Nov 20 21:48:34 ha-409851 kubelet[805]: I1120 21:48:34.138084     805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:48:39 ha-409851 kubelet[805]: I1120 21:48:39.807902     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.897097     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49 WatchSource:0}: Error finding container fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49: Status 404 returned error can't find the container with id fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.904639     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b WatchSource:0}: Error finding container f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b: Status 404 returned error can't find the container with id f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b
	Nov 20 21:48:42 ha-409851 kubelet[805]: W1120 21:48:42.819704     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac WatchSource:0}: Error finding container 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac: Status 404 returned error can't find the container with id 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac
	Nov 20 21:48:43 ha-409851 kubelet[805]: W1120 21:48:43.900976     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce WatchSource:0}: Error finding container 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce: Status 404 returned error can't find the container with id 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-409851 -n ha-409851
helpers_test.go:269: (dbg) Run:  kubectl --context ha-409851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.36s)
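The kube-controller-manager excerpt in this section shows repeated "Operation cannot be fulfilled ... the object has been modified" conflicts on the kube-dns EndpointSlice and the coredns ReplicaSet. These are ordinary optimistic-concurrency conflicts: the writer held a stale resourceVersion and the apiserver asked it to re-read and retry. Below is a minimal sketch of that retry pattern using client-go's retry.RetryOnConflict; the clientset and the label edit are hypothetical, only the re-read-then-update loop is the point.

	package retryexample

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// patchEndpointSliceLabel re-reads the EndpointSlice on every attempt so the
	// update is always issued against the latest resourceVersion, which is what
	// the apiserver's conflict error asks for.
	func patchEndpointSliceLabel(ctx context.Context, cs kubernetes.Interface, ns, name, key, value string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			eps, err := cs.DiscoveryV1().EndpointSlices(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if eps.Labels == nil {
				eps.Labels = map[string]string{}
			}
			eps.Labels[key] = value
			_, err = cs.DiscoveryV1().EndpointSlices(ns).Update(ctx, eps, metav1.UpdateOptions{})
			return err
		})
	}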

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.101056723s)
ha_test.go:309: expected profile "ha-409851" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-409851\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-409851\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-409851\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-p
lugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fals
e,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-409851
helpers_test.go:243: (dbg) docker inspect ha-409851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	        "Created": "2025-11-20T21:32:05.722530265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893938,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:46:13.072458678Z",
	            "FinishedAt": "2025-11-20T21:46:12.348513553Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hostname",
	        "HostsPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/hosts",
	        "LogPath": "/var/lib/docker/containers/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853-json.log",
	        "Name": "/ha-409851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-409851:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-409851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853",
	                "LowerDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20be6d1b76b0fac3e91394637db4e5d8af952cef9b2dbadada94ba6079a4b3e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-409851",
	                "Source": "/var/lib/docker/volumes/ha-409851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-409851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-409851",
	                "name.minikube.sigs.k8s.io": "ha-409851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc18c8f3af5088b5bb1d9ce24d0b962e6479dd84027377689edccf3f48baefb2",
	            "SandboxKey": "/var/run/docker/netns/cc18c8f3af50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-409851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:23:29:98:04:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad232b357b1bc65babf7a48f3581b00686ef0ccc0f86acee1a57f8a071f682f1",
	                    "EndpointID": "42281e0852c3f6fd3ef3ee7cb17a8b94df54edc9c35c3a29e94bd1eb0ceadb4a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-409851",
	                        "d20916d298c9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
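The inspect output above shows how the docker driver publishes the guest ports on randomized localhost ports: 22/tcp on 127.0.0.1:33937 and the apiserver's 8443/tcp on 127.0.0.1:33940. A minimal sketch that reads the 8443/tcp binding back out of the same `docker inspect` JSON (only the fields used here are modelled; the container name is the one inspected above):

	package hostport

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// apiServerHostPort returns e.g. "127.0.0.1:33940" for the ha-409851 container.
	func apiServerHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		b := entries[0].NetworkSettings.Ports["8443/tcp"]
		if len(b) == 0 {
			return "", fmt.Errorf("8443/tcp is not published")
		}
		return b[0].HostIp + ":" + b[0].HostPort, nil
	}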
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-409851 -n ha-409851
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 logs -n 25: (1.879546045s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:36 UTC │
	│ cp      │ ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt                                                            │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:36 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt                                                │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m02 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ cp      │ ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt              │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ ssh     │ ha-409851 ssh -n ha-409851-m03 sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node start m02 --alsologtostderr -v 5                                                                                     │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:37 UTC │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │                     │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:37 UTC │ 20 Nov 25 21:38 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5                                                                                  │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:38 UTC │                     │
	│ node    │ ha-409851 node list --alsologtostderr -v 5                                                                                          │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │                     │
	│ node    │ ha-409851 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:45 UTC │
	│ stop    │ ha-409851 stop --alsologtostderr -v 5                                                                                               │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:45 UTC │ 20 Nov 25 21:46 UTC │
	│ start   │ ha-409851 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:46 UTC │                     │
	│ node    │ ha-409851 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-409851 │ jenkins │ v1.37.0 │ 20 Nov 25 21:52 UTC │ 20 Nov 25 21:53 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:46:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:46:12.791438  893814 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:46:12.791547  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791556  893814 out.go:374] Setting ErrFile to fd 2...
	I1120 21:46:12.791561  893814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.791812  893814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:46:12.792153  893814 out.go:368] Setting JSON to false
	I1120 21:46:12.792975  893814 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16098,"bootTime":1763659075,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:46:12.793039  893814 start.go:143] virtualization:  
	I1120 21:46:12.796567  893814 out.go:179] * [ha-409851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:46:12.800274  893814 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:46:12.800333  893814 notify.go:221] Checking for updates...
	I1120 21:46:12.805930  893814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:46:12.808740  893814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:12.811665  893814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:46:12.814590  893814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:46:12.817489  893814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:46:12.820869  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:12.821456  893814 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:46:12.854504  893814 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:46:12.854629  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.916245  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.907017867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.916354  893814 docker.go:319] overlay module found
	I1120 21:46:12.921281  893814 out.go:179] * Using the docker driver based on existing profile
	I1120 21:46:12.924086  893814 start.go:309] selected driver: docker
	I1120 21:46:12.924103  893814 start.go:930] validating driver "docker" against &{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.924235  893814 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:46:12.924335  893814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:46:12.982109  893814 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-20 21:46:12.972838498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:46:12.982542  893814 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:46:12.982605  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:12.982654  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:12.982705  893814 start.go:353] cluster config:
	{Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:12.987881  893814 out.go:179] * Starting "ha-409851" primary control-plane node in "ha-409851" cluster
	I1120 21:46:12.990803  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:12.993745  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:12.996606  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:12.996692  893814 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:46:12.996690  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:12.996704  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:12.996891  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:12.996899  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:12.997043  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.017636  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:13.017661  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:13.017680  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:13.017708  893814 start.go:360] acquireMachinesLock for ha-409851: {Name:mk8d4d263fd846febb903e54335147f9d639d302 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:13.017784  893814 start.go:364] duration metric: took 50.068µs to acquireMachinesLock for "ha-409851"
	I1120 21:46:13.017814  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:13.017825  893814 fix.go:54] fixHost starting: 
	I1120 21:46:13.018084  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.035594  893814 fix.go:112] recreateIfNeeded on ha-409851: state=Stopped err=<nil>
	W1120 21:46:13.035627  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:13.038907  893814 out.go:252] * Restarting existing docker container for "ha-409851" ...
	I1120 21:46:13.039022  893814 cli_runner.go:164] Run: docker start ha-409851
	I1120 21:46:13.304460  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:13.328120  893814 kic.go:430] container "ha-409851" state is running.
	I1120 21:46:13.328719  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:13.354344  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:13.354582  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:13.354651  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:13.379550  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:13.379870  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:13.379890  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:13.380728  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:46:16.522806  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.522896  893814 ubuntu.go:182] provisioning hostname "ha-409851"
	I1120 21:46:16.523007  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.540197  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.540514  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.540535  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851 && echo "ha-409851" | sudo tee /etc/hostname
	I1120 21:46:16.694351  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851
	
	I1120 21:46:16.694434  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.711779  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:16.712102  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:16.712124  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:16.851168  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
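The hostname step above sets the node name and pins 127.0.1.1 to "ha-409851" in /etc/hosts. A minimal sketch for spot-checking the result from the host, assuming the test's built minikube binary is on PATH as `minikube` (it is `out/minikube-linux-arm64` in this run):

	minikube -p ha-409851 ssh -- "hostname && grep ha-409851 /etc/hosts"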
	I1120 21:46:16.851196  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:16.851221  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:16.851230  893814 provision.go:84] configureAuth start
	I1120 21:46:16.851299  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:16.868945  893814 provision.go:143] copyHostCerts
	I1120 21:46:16.868995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869035  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:16.869055  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:16.869140  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:16.869236  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869258  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:16.869266  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:16.869304  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:16.869353  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869373  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:16.869384  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:16.869416  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:16.869469  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851 san=[127.0.0.1 192.168.49.2 ha-409851 localhost minikube]
	I1120 21:46:16.952356  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:16.952425  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:16.952478  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:16.973308  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.074564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:17.074634  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1120 21:46:17.091858  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:17.091917  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:17.109606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:17.109674  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:17.127878  893814 provision.go:87] duration metric: took 276.622438ms to configureAuth
	I1120 21:46:17.127903  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:17.128138  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:17.128246  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.145230  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:17.145555  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1120 21:46:17.145568  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:17.521503  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:17.521523  893814 machine.go:97] duration metric: took 4.166931199s to provisionDockerMachine
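The provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; that the crio systemd unit in the kicbase image actually loads this file via an EnvironmentFile= directive is an assumption, not something shown in the log. A sketch for checking both the drop-in and the unit wiring:

	minikube -p ha-409851 ssh -- "cat /etc/sysconfig/crio.minikube; systemctl cat crio | grep -i environmentfile"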
	I1120 21:46:17.521535  893814 start.go:293] postStartSetup for "ha-409851" (driver="docker")
	I1120 21:46:17.521545  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:17.521607  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:17.521648  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.543040  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.642924  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:17.646266  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:17.646295  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:17.646306  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:17.646362  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:17.646441  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:17.646453  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:17.646557  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:17.654029  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:17.671759  893814 start.go:296] duration metric: took 150.208491ms for postStartSetup
	I1120 21:46:17.671861  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:17.671903  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.688970  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.788149  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:17.792950  893814 fix.go:56] duration metric: took 4.775117155s for fixHost
	I1120 21:46:17.792985  893814 start.go:83] releasing machines lock for "ha-409851", held for 4.775188491s
	I1120 21:46:17.793094  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:46:17.811172  893814 ssh_runner.go:195] Run: cat /version.json
	I1120 21:46:17.811227  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.811496  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:17.811569  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:46:17.830577  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:17.847514  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:46:18.032855  893814 ssh_runner.go:195] Run: systemctl --version
	I1120 21:46:18.039676  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:18.084631  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:18.089315  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:18.089397  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:18.097880  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:18.097906  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:18.097957  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:18.098046  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:18.113581  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:18.127110  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:18.127198  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:18.143327  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:18.156859  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:18.285846  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:18.406177  893814 docker.go:234] disabling docker service ...
	I1120 21:46:18.406303  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:18.422621  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:18.436488  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:18.557150  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:18.669376  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:18.683020  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:18.696701  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:18.696805  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.705450  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:18.705544  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.714727  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.724078  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.733001  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:18.741246  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.750057  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.758559  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:18.767154  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:18.774675  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:18.782542  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:18.908183  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
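The sed edits above pin the pause image, the cgroup manager, and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A sketch for verifying the resulting settings and that the service came back up (same hypothetical `minikube` wrapper as above):

	minikube -p ha-409851 ssh -- "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf; systemctl is-active crio"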
	I1120 21:46:19.102647  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:46:19.102768  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:46:19.107633  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:46:19.107713  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:46:19.112020  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:46:19.139825  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:46:19.139929  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.171276  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:46:19.211415  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:46:19.214291  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:46:19.231738  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:46:19.235755  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.246147  893814 kubeadm.go:884] updating cluster {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:46:19.246304  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:19.246367  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.290538  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.290565  893814 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:46:19.290626  893814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:46:19.316155  893814 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:46:19.316180  893814 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:46:19.316189  893814 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 21:46:19.316303  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
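The kubelet flags above are rendered into a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a little further down). A sketch for inspecting what systemd actually loads for the kubelet unit, drop-ins included:

	minikube -p ha-409851 ssh -- "systemctl cat kubelet | tail -n 20"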
	I1120 21:46:19.316387  893814 ssh_runner.go:195] Run: crio config
	I1120 21:46:19.371279  893814 cni.go:84] Creating CNI manager for ""
	I1120 21:46:19.371300  893814 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1120 21:46:19.371316  893814 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:46:19.371339  893814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-409851 NodeName:ha-409851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:46:19.371462  893814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-409851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
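The kubeadm config generated above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). Recent kubeadm releases can lint such a file; a sketch, assuming `kubeadm config validate` is available in the v1.34.1 binary under the binaries directory shown in the kubelet ExecStart:

	minikube -p ha-409851 ssh -- "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"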
	
	I1120 21:46:19.371484  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:46:19.371537  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:46:19.384116  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:19.384238  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
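The kube-vip manifest above lands in /etc/kubernetes/manifests (the scp of kube-vip.yaml follows below), so kubelet runs it as a static pod that advertises the HA virtual IP 192.168.49.254 on eth0 once it wins leader election. A sketch for checking that it came up and that the VIP is bound:

	minikube -p ha-409851 ssh -- "ls /etc/kubernetes/manifests/kube-vip.yaml && sudo crictl pods --name kube-vip && ip addr show eth0 | grep 192.168.49.254"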
	I1120 21:46:19.384326  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:46:19.392356  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:46:19.392430  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 21:46:19.400069  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 21:46:19.413705  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:46:19.427554  893814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1120 21:46:19.440926  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:46:19.454200  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:46:19.457772  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:46:19.467840  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:19.582412  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:46:19.599710  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.2
	I1120 21:46:19.599791  893814 certs.go:195] generating shared ca certs ...
	I1120 21:46:19.599822  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.599996  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:46:19.600074  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:46:19.600106  893814 certs.go:257] generating profile certs ...
	I1120 21:46:19.600223  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:46:19.600276  893814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee
	I1120 21:46:19.600310  893814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1120 21:46:19.750831  893814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee ...
	I1120 21:46:19.750905  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee: {Name:mk539a3dda8a36b48c6c5c30b7491f9043b065a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751146  893814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee ...
	I1120 21:46:19.751277  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee: {Name:mk851c2f98f193e8bb483e43db8a657c69eae8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:19.751416  893814 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt
	I1120 21:46:19.751615  893814 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.8e76f7ee -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key
	I1120 21:46:19.751796  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:46:19.751838  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:46:19.751886  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:46:19.751918  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:46:19.751961  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:46:19.751995  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:46:19.752027  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:46:19.752070  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:46:19.752104  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:46:19.752174  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:46:19.752242  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:46:19.752268  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:46:19.752317  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:46:19.752367  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:46:19.752427  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:46:19.752538  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:19.752606  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:46:19.752639  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:46:19.752686  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:19.753263  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:46:19.782536  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:46:19.807080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:46:19.842006  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:46:19.863690  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:46:19.882351  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:46:19.902131  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:46:19.923247  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:46:19.943308  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:46:19.961281  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:46:19.981823  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:46:19.999815  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
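After the certificate copies above, the apiserver cert on the node should carry the SANs generated earlier in this restart (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3 and the VIP 192.168.49.254). A sketch for confirming them in place:

	minikube -p ha-409851 ssh -- "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'"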
	I1120 21:46:20.019398  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:46:20.026511  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.035530  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:46:20.043827  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048146  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.048252  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:46:20.090685  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:46:20.099210  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.107103  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:46:20.115263  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119310  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.119405  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:46:20.160958  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:46:20.168922  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.176806  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:46:20.184554  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188641  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.188742  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:46:20.232577  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
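The `openssl x509 -hash` / `ln -fs` / `test -L` sequence above builds the OpenSSL-style trust store: each CA file gets a symlink named after its subject hash (b5213941.0 for minikubeCA here). A sketch of the same technique on its own, run inside the node with the paths from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"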
	I1120 21:46:20.246815  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:46:20.252000  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:46:20.307993  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:46:20.361067  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:46:20.404267  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:46:20.471141  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:46:20.556774  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
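The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit is what would push minikube to regenerate certs instead of reusing them. A sketch showing how the exit status reads, using one of the paths from the log:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h (or could not be read)"
	fi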
	I1120 21:46:20.620581  893814 kubeadm.go:401] StartCluster: {Name:ha-409851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:46:20.620772  893814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:46:20.620872  893814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:46:20.672595  893814 cri.go:89] found id: "e758e4601a79aacd9dd015c82692281d156d9100d6bc2fb480b11d07ff223294"
	I1120 21:46:20.672675  893814 cri.go:89] found id: "bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede"
	I1120 21:46:20.672702  893814 cri.go:89] found id: "29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de"
	I1120 21:46:20.672723  893814 cri.go:89] found id: "d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd"
	I1120 21:46:20.672755  893814 cri.go:89] found id: "538778f2e99f0831684f744a21c231b476e72c223d7af53829698631c58b4b38"
	I1120 21:46:20.672779  893814 cri.go:89] found id: ""
	I1120 21:46:20.672864  893814 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:46:20.692788  893814 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:46:20Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:46:20.692935  893814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:46:20.704191  893814 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:46:20.704251  893814 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:46:20.704341  893814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:46:20.715485  893814 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:46:20.716011  893814 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-409851" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.716179  893814 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "ha-409851" cluster setting kubeconfig missing "ha-409851" context setting]
	I1120 21:46:20.716543  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.717160  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:46:20.717985  893814 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:46:20.718059  893814 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 21:46:20.718131  893814 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:46:20.718157  893814 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:46:20.718177  893814 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:46:20.718212  893814 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:46:20.730102  893814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:46:20.744141  893814 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 21:46:20.744165  893814 kubeadm.go:602] duration metric: took 39.885836ms to restartPrimaryControlPlane
	I1120 21:46:20.744174  893814 kubeadm.go:403] duration metric: took 123.603025ms to StartCluster
	I1120 21:46:20.744191  893814 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:46:20.744256  893814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:46:20.744888  893814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
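The kubeconfig repair above re-adds the "ha-409851" cluster and context to the integration run's kubeconfig. A sketch for confirming the context resolves once the API server is reachable again, using the kubeconfig path shown in the log (assumes kubectl is installed on the host):

	KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig \
	  kubectl --context ha-409851 get nodes -o wide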
	I1120 21:46:20.745066  893814 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:46:20.745084  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:46:20.745100  893814 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:46:20.745725  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.751118  893814 out.go:179] * Enabled addons: 
	I1120 21:46:20.754039  893814 addons.go:515] duration metric: took 8.930638ms for enable addons: enabled=[]
	I1120 21:46:20.754080  893814 start.go:247] waiting for cluster config update ...
	I1120 21:46:20.754090  893814 start.go:256] writing updated cluster config ...
	I1120 21:46:20.757337  893814 out.go:203] 
	I1120 21:46:20.760537  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:20.760717  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.764214  893814 out.go:179] * Starting "ha-409851-m02" control-plane node in "ha-409851" cluster
	I1120 21:46:20.767355  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:46:20.770446  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:46:20.773470  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:46:20.773563  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:46:20.773537  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:46:20.773902  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:46:20.773939  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:46:20.774117  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:20.801641  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:46:20.801660  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:46:20.801671  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:46:20.801698  893814 start.go:360] acquireMachinesLock for ha-409851-m02: {Name:mka809540f7c511f76e83dac3b1218011243fbec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:46:20.801748  893814 start.go:364] duration metric: took 35.446µs to acquireMachinesLock for "ha-409851-m02"
	I1120 21:46:20.801767  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:46:20.801774  893814 fix.go:54] fixHost starting: m02
	I1120 21:46:20.802025  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:20.830914  893814 fix.go:112] recreateIfNeeded on ha-409851-m02: state=Stopped err=<nil>
	W1120 21:46:20.830963  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:46:20.835462  893814 out.go:252] * Restarting existing docker container for "ha-409851-m02" ...
	I1120 21:46:20.835556  893814 cli_runner.go:164] Run: docker start ha-409851-m02
	I1120 21:46:21.218686  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:21.252602  893814 kic.go:430] container "ha-409851-m02" state is running.
	I1120 21:46:21.252990  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:21.287738  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:46:21.288165  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:46:21.288242  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:21.321625  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:21.321986  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:21.322003  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:46:21.324132  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50986->127.0.0.1:33942: read: connection reset by peer
	I1120 21:46:24.541429  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.541464  893814 ubuntu.go:182] provisioning hostname "ha-409851-m02"
	I1120 21:46:24.541536  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.591123  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.591436  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.591454  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m02 && echo "ha-409851-m02" | sudo tee /etc/hostname
	I1120 21:46:24.829670  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m02
	
	I1120 21:46:24.830508  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:24.868680  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:24.868993  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:24.869016  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:46:25.086415  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:46:25.086446  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:46:25.086467  893814 ubuntu.go:190] setting up certificates
	I1120 21:46:25.086477  893814 provision.go:84] configureAuth start
	I1120 21:46:25.086545  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:25.116440  893814 provision.go:143] copyHostCerts
	I1120 21:46:25.116492  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116528  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:46:25.116540  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:46:25.116614  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:46:25.116704  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116727  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:46:25.116737  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:46:25.116766  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:46:25.116814  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116842  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:46:25.116852  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:46:25.116880  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:46:25.116934  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m02 san=[127.0.0.1 192.168.49.3 ha-409851-m02 localhost minikube]
	I1120 21:46:25.299085  893814 provision.go:177] copyRemoteCerts
	I1120 21:46:25.299152  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:46:25.299205  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.334304  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:25.454142  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:46:25.454207  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:46:25.519452  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:46:25.519523  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:46:25.579807  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:46:25.579872  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:46:25.625625  893814 provision.go:87] duration metric: took 539.133654ms to configureAuth
	I1120 21:46:25.625654  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:46:25.625881  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:25.626005  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:25.676739  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:46:25.677055  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1120 21:46:25.677078  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:46:27.313592  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:46:27.313611  893814 machine.go:97] duration metric: took 6.025425517s to provisionDockerMachine
	I1120 21:46:27.313622  893814 start.go:293] postStartSetup for "ha-409851-m02" (driver="docker")
	I1120 21:46:27.313633  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:46:27.313709  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:46:27.313760  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.348890  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.472301  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:46:27.476588  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:46:27.476614  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:46:27.476626  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:46:27.476683  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:46:27.476757  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:46:27.476765  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:46:27.476876  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:46:27.485018  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:46:27.504498  893814 start.go:296] duration metric: took 190.860481ms for postStartSetup
	I1120 21:46:27.504660  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:46:27.504741  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.528788  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.644723  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:46:27.649843  893814 fix.go:56] duration metric: took 6.84806345s for fixHost
	I1120 21:46:27.649868  893814 start.go:83] releasing machines lock for "ha-409851-m02", held for 6.848112263s
	I1120 21:46:27.649945  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m02
	I1120 21:46:27.674188  893814 out.go:179] * Found network options:
	I1120 21:46:27.677242  893814 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 21:46:27.680124  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:46:27.680168  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:46:27.680244  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:46:27.680247  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:46:27.680288  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.680307  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m02
	I1120 21:46:27.700610  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.707137  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m02/id_rsa Username:docker}
	I1120 21:46:27.925105  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:46:28.059572  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:46:28.059657  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:46:28.074369  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:46:28.074399  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:46:28.074432  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:46:28.074499  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:46:28.097384  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:46:28.115088  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:46:28.115159  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:46:28.145681  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:46:28.169842  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:46:28.395806  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:46:28.633186  893814 docker.go:234] disabling docker service ...
	I1120 21:46:28.633295  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:46:28.653639  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:46:28.673051  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:46:28.911134  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:46:29.139790  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:46:29.165309  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:46:29.189385  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:46:29.189499  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.203577  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:46:29.203723  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.219781  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.229964  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.247451  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:46:29.257774  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.270135  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.279629  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:46:29.289968  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:46:29.299527  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:46:29.308385  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:46:29.625535  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:47:59.900415  893814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.274799929s)
	I1120 21:47:59.900439  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:47:59.900493  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
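	[reviewer note] The CRI-O restart above took about 1m30s, after which the log waits up to 60s for the CRI socket to exist before probing crictl. A minimal sketch of such a wait loop, assuming the socket path shown in the log:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls until the unix socket exists or the deadline passes.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", path)
	    }

	    func main() {
	        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("CRI socket is ready")
	    }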
	I1120 21:47:59.904340  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:47:59.904408  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:47:59.908141  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:47:59.934786  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:47:59.934878  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:47:59.970641  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:00.031101  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:00.052822  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:00.070551  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
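	[reviewer note] The network probe above drives `docker network inspect` through a fairly involved Go template. For reference, a hedged sketch of the same query reduced to subnet and gateway; the network name comes from the log, but the simplified template here is illustrative rather than the one minikube uses:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // networkCIDR asks the Docker CLI for a network's subnet and gateway
	    // using a Go template, mirroring the inspect call in the log.
	    func networkCIDR(name string) (string, error) {
	        out, err := exec.Command("docker", "network", "inspect", name,
	            "-f", "{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}").Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	        info, err := networkCIDR("ha-409851")
	        if err != nil {
	            fmt.Println("inspect failed:", err)
	            return
	        }
	        fmt.Println(info) // e.g. "192.168.49.0/24 192.168.49.1"
	    }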
	I1120 21:48:00.144325  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:00.158851  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
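	[reviewer note] The bash one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal mapping, append the fresh one, and copy a temp file back into place. A minimal Go sketch of the same filter-and-append, writing to a scratch file (path is illustrative) rather than the real /etc/hosts:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // upsertHostsEntry removes any line ending in "\t<name>" and appends
	    // "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
	    func upsertHostsEntry(contents, ip, name string) string {
	        lines := strings.Split(strings.TrimRight(contents, "\n"), "\n")
	        var kept []string
	        for _, line := range lines {
	            if strings.HasSuffix(line, "\t"+name) {
	                continue // drop the stale mapping
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, ip+"\t"+name)
	        return strings.Join(kept, "\n") + "\n"
	    }

	    func main() {
	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        updated := upsertHostsEntry(string(data), "192.168.49.1", "host.minikube.internal")
	        // Written to a scratch file here; the log copies its temp file over /etc/hosts.
	        if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
	            fmt.Println(err)
	        }
	    }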
	I1120 21:48:00.193319  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:00.193638  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:00.193952  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:00.257208  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:00.257542  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.3
	I1120 21:48:00.257559  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:00.257575  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:00.257700  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:00.257744  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:00.257751  893814 certs.go:257] generating profile certs ...
	I1120 21:48:00.257839  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key
	I1120 21:48:00.257904  893814 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key.e3c52656
	I1120 21:48:00.257941  893814 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key
	I1120 21:48:00.257951  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:00.257964  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:00.257975  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:00.257985  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:00.257997  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 21:48:00.258009  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 21:48:00.258021  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 21:48:00.258032  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 21:48:00.258087  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:00.258118  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:00.258141  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:00.258171  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:00.258206  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:00.258229  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:00.258276  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:00.258311  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.258325  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.258342  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:00.258416  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:48:00.286658  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:48:00.411419  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 21:48:00.416825  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 21:48:00.429106  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 21:48:00.434141  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 21:48:00.446859  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 21:48:00.451932  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 21:48:00.463743  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 21:48:00.468370  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 21:48:00.478967  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 21:48:00.483728  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 21:48:00.495516  893814 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 21:48:00.499782  893814 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 21:48:00.510022  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:00.533411  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:00.557609  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:00.579641  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:00.599346  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:48:00.622831  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:48:00.643496  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:48:00.662349  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:48:00.681048  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:00.700389  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:00.721204  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:00.741591  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 21:48:00.755291  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 21:48:00.769986  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 21:48:00.784853  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 21:48:00.798923  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 21:48:00.812361  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 21:48:00.826911  893814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 21:48:00.842313  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:00.849394  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.857032  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:00.864532  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868398  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.868472  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:00.910592  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:00.918458  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.926263  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:00.934304  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938442  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.938531  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:00.987101  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:00.995288  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.003879  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:01.012703  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016823  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.016924  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:01.059233  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:01.068459  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:01.072670  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:48:01.115135  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:48:01.157870  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:48:01.200156  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:48:01.244244  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:48:01.286456  893814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
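	[reviewer note] The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least 24 hours. A minimal Go equivalent using crypto/x509, assuming a PEM certificate path is passed as the first command-line argument:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires inside d,
	    // the same check `openssl x509 -checkend` performs.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	        if err != nil {
	            fmt.Println("check failed:", err)
	            os.Exit(1)
	        }
	        if soon {
	            fmt.Println("certificate expires within 24h")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is valid for at least 24h")
	    }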
	I1120 21:48:01.333479  893814 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 21:48:01.333592  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:01.333632  893814 kube-vip.go:115] generating kube-vip config ...
	I1120 21:48:01.333685  893814 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 21:48:01.347658  893814 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:48:01.347774  893814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
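	[reviewer note] Before rendering the kube-vip static-pod manifest above, the step probes for the ip_vs kernel module (`lsmod | grep ip_vs`) and, since it is absent on this runner, gives up on control-plane load-balancing. A minimal sketch of that probe, reading /proc/modules directly instead of shelling out:

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "strings"
	    )

	    // hasIPVS reports whether an ip_vs kernel module is loaded by scanning
	    // /proc/modules (the same information `lsmod` prints).
	    func hasIPVS() (bool, error) {
	        f, err := os.Open("/proc/modules")
	        if err != nil {
	            return false, err
	        }
	        defer f.Close()
	        sc := bufio.NewScanner(f)
	        for sc.Scan() {
	            if strings.HasPrefix(sc.Text(), "ip_vs") {
	                return true, nil
	            }
	        }
	        return false, sc.Err()
	    }

	    func main() {
	        ok, err := hasIPVS()
	        if err != nil {
	            fmt.Println("probe failed:", err)
	            return
	        }
	        fmt.Println("ip_vs available:", ok) // false here => skip lb config
	    }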
	I1120 21:48:01.347874  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:01.355891  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:01.355970  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 21:48:01.364043  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:01.379594  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:01.393213  893814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 21:48:01.408709  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:01.412906  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:01.423617  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.551671  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.569302  893814 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:48:01.569783  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:01.575430  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:01.578446  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:01.722511  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:01.736860  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:01.736934  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:01.737186  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960847  893814 node_ready.go:49] node "ha-409851-m02" is "Ready"
	I1120 21:48:04.960925  893814 node_ready.go:38] duration metric: took 3.223709398s for node "ha-409851-m02" to be "Ready" ...
	I1120 21:48:04.960953  893814 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:48:04.961033  893814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:48:05.021304  893814 api_server.go:72] duration metric: took 3.451906522s to wait for apiserver process to appear ...
	I1120 21:48:05.021328  893814 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:48:05.021347  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.086025  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:48:05.086102  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:48:05.521475  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:05.533319  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:05.533405  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.022053  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.033112  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.033164  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:06.521455  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:06.532108  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:06.532149  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.021472  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.033567  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.033607  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:07.522248  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:07.530734  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:07.530766  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.021549  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.030067  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.030107  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:08.521458  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:08.536690  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:08.536723  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.022442  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.030694  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.030720  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:09.522023  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:09.532358  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:09.532394  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.022104  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.033572  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.033669  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:10.521893  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:10.530183  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:10.530209  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.022029  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.030471  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.030511  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:11.522184  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:11.530808  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:48:11.530915  893814 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:48:12.021498  893814 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 21:48:12.034571  893814 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 21:48:12.037300  893814 api_server.go:141] control plane version: v1.34.1
	I1120 21:48:12.037383  893814 api_server.go:131] duration metric: took 7.016046235s to wait for apiserver health ...
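Editor's note: the polling sequence above (repeated GETs to /healthz, treating an HTTP 500 as "post-start hooks not finished yet" and retrying roughly every 500ms until a 200 arrives) can be sketched as below. This is a minimal illustration, not minikube's actual api_server.go code; the endpoint URL, the 500ms interval, the overall timeout, and the decision to skip TLS verification are assumptions made only for the example.

// healthzwait is a minimal sketch of the wait-for-apiserver pattern seen in
// the log above: poll /healthz, treat any non-200 status as "not ready yet",
// and retry every 500ms until a 200 arrives or the deadline expires.
// The URL, timeout, and InsecureSkipVerify are assumptions for illustration.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver presents a self-signed certificate in this sketch,
		// so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is ready
			}
			// A 500 here usually means post-start hooks (e.g. rbac/bootstrap-roles)
			// have not completed; fall through and retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver /healthz")
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}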
	I1120 21:48:12.037406  893814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:48:12.048906  893814 system_pods.go:59] 26 kube-system pods found
	I1120 21:48:12.049004  893814 system_pods.go:61] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049030  893814 system_pods.go:61] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.049050  893814 system_pods.go:61] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.049082  893814 system_pods.go:61] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.049108  893814 system_pods.go:61] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.049158  893814 system_pods.go:61] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.049176  893814 system_pods.go:61] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.049198  893814 system_pods.go:61] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.049233  893814 system_pods.go:61] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.049257  893814 system_pods.go:61] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.049279  893814 system_pods.go:61] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.049316  893814 system_pods.go:61] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.049340  893814 system_pods.go:61] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049361  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.049397  893814 system_pods.go:61] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.049417  893814 system_pods.go:61] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.049437  893814 system_pods.go:61] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.049467  893814 system_pods.go:61] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.049494  893814 system_pods.go:61] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.049514  893814 system_pods.go:61] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.049534  893814 system_pods.go:61] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.049569  893814 system_pods.go:61] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.049590  893814 system_pods.go:61] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.049611  893814 system_pods.go:61] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.049637  893814 system_pods.go:61] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.049658  893814 system_pods.go:61] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.049682  893814 system_pods.go:74] duration metric: took 12.253231ms to wait for pod list to return data ...
	I1120 21:48:12.049715  893814 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:48:12.054143  893814 default_sa.go:45] found service account: "default"
	I1120 21:48:12.054233  893814 default_sa.go:55] duration metric: took 4.491625ms for default service account to be created ...
	I1120 21:48:12.054260  893814 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:48:12.060879  893814 system_pods.go:86] 26 kube-system pods found
	I1120 21:48:12.060981  893814 system_pods.go:89] "coredns-66bc5c9577-pjk6c" [ad25e130-cf9b-4f5e-b082-23c452bd1c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061047  893814 system_pods.go:89] "coredns-66bc5c9577-vfsp6" [09c1e0dd-0208-4f69-aac9-670197f4c848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:48:12.061081  893814 system_pods.go:89] "etcd-ha-409851" [f7d7a996-2988-4dbc-8257-3a2c4e2702ea] Running
	I1120 21:48:12.061118  893814 system_pods.go:89] "etcd-ha-409851-m02" [52c37de9-adc4-4376-8e31-46d3db24a767] Running
	I1120 21:48:12.061152  893814 system_pods.go:89] "etcd-ha-409851-m03" [6a07e989-c136-4324-b3e7-7002b12c80a3] Running
	I1120 21:48:12.061181  893814 system_pods.go:89] "kindnet-27z7m" [e02020db-ed1d-4ee5-84c5-580083b7a667] Running
	I1120 21:48:12.061223  893814 system_pods.go:89] "kindnet-2d5r9" [3fea6a82-25d1-414f-b734-0853d96fbd20] Running
	I1120 21:48:12.061271  893814 system_pods.go:89] "kindnet-56lr8" [8ca0a226-7ec9-45ad-865f-6374f3c0eb31] Running
	I1120 21:48:12.061294  893814 system_pods.go:89] "kindnet-7hmbf" [562945a4-84ec-46c8-b77e-abdd9d577c9c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:48:12.061323  893814 system_pods.go:89] "kube-apiserver-ha-409851" [8a78cd3e-73fb-4c99-9597-599efd2f72bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:48:12.061400  893814 system_pods.go:89] "kube-apiserver-ha-409851-m02" [e1078831-0b81-402d-9f83-fa15b7b2d348] Running
	I1120 21:48:12.061442  893814 system_pods.go:89] "kube-apiserver-ha-409851-m03" [b5e92fc4-b292-4275-993b-79c7bf8001e4] Running
	I1120 21:48:12.061465  893814 system_pods.go:89] "kube-controller-manager-ha-409851" [48f753e0-189d-4b2a-a31c-e017d6ddf75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061496  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m02" [4688079e-5a79-45e4-b5ec-955c881c865e] Running
	I1120 21:48:12.061529  893814 system_pods.go:89] "kube-controller-manager-ha-409851-m03" [58a68fae-7334-470e-8458-ab6fbbaadbf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:48:12.061551  893814 system_pods.go:89] "kube-proxy-4qqxh" [2f7683fa-0199-444f-bcf4-42666203c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:48:12.061574  893814 system_pods.go:89] "kube-proxy-jh55s" [d4884cb3-7650-4842-95ff-e077dc982bcc] Running
	I1120 21:48:12.061605  893814 system_pods.go:89] "kube-proxy-pz7vt" [dbc87cfd-0cae-4ccc-9a48-8b33af4c840e] Running
	I1120 21:48:12.061634  893814 system_pods.go:89] "kube-proxy-xnhl6" [4d828c3c-acdc-4434-a5fe-53224431b5c7] Running
	I1120 21:48:12.061656  893814 system_pods.go:89] "kube-scheduler-ha-409851" [625f953f-8f87-4f3f-bbaf-ca762aab8119] Running
	I1120 21:48:12.061691  893814 system_pods.go:89] "kube-scheduler-ha-409851-m02" [31e4a0da-f6a8-469b-a844-bf70fa6614b6] Running
	I1120 21:48:12.061711  893814 system_pods.go:89] "kube-scheduler-ha-409851-m03" [22490b9d-cc1d-4360-bfae-e2915029e33b] Running
	I1120 21:48:12.061741  893814 system_pods.go:89] "kube-vip-ha-409851" [952fa273-4854-4256-90e3-24c3e408041c] Running
	I1120 21:48:12.061774  893814 system_pods.go:89] "kube-vip-ha-409851-m02" [731d2d1e-089e-4e65-ba76-32a350424d62] Running
	I1120 21:48:12.061808  893814 system_pods.go:89] "kube-vip-ha-409851-m03" [6c261aec-8543-40b7-bdf6-928b2de2f764] Running
	I1120 21:48:12.061865  893814 system_pods.go:89] "storage-provisioner" [349c85dc-6341-43ab-b388-8734d72e3040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:48:12.061888  893814 system_pods.go:126] duration metric: took 7.607421ms to wait for k8s-apps to be running ...
	I1120 21:48:12.061910  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:12.062033  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:12.076739  893814 system_svc.go:56] duration metric: took 14.81844ms WaitForService to wait for kubelet
	I1120 21:48:12.076837  893814 kubeadm.go:587] duration metric: took 10.507445578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:12.076873  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:12.086832  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086926  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.086951  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.086971  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087052  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:12.087072  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:12.087105  893814 node_conditions.go:105] duration metric: took 10.20235ms to run NodePressure ...
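Editor's note: the per-node capacity figures logged above (ephemeral storage and CPU for each of the three control-plane nodes) come from reading node status. A hedged client-go sketch of the same read is below; the kubeconfig path is an assumption, and this is not the exact helper minikube's node_conditions.go uses.

// nodecap is a minimal client-go sketch of the node-capacity check logged
// above. The kubeconfig path is an assumption; minikube resolves its own
// per-profile kubeconfig internally.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}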
	I1120 21:48:12.087136  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:12.087208  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:12.090921  893814 out.go:203] 
	I1120 21:48:12.094218  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:12.094393  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.097669  893814 out.go:179] * Starting "ha-409851-m04" worker node in "ha-409851" cluster
	I1120 21:48:12.101322  893814 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:48:12.106565  893814 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:48:12.109717  893814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:48:12.109827  893814 cache.go:65] Caching tarball of preloaded images
	I1120 21:48:12.109799  893814 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:48:12.110177  893814 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 21:48:12.110212  893814 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:48:12.110403  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.132566  893814 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:48:12.132590  893814 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:48:12.132610  893814 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:48:12.132636  893814 start.go:360] acquireMachinesLock for ha-409851-m04: {Name:mk87280fc97adfe0461a2851d285457d7b179a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:48:12.132693  893814 start.go:364] duration metric: took 36.636µs to acquireMachinesLock for "ha-409851-m04"
	I1120 21:48:12.132719  893814 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:48:12.132728  893814 fix.go:54] fixHost starting: m04
	I1120 21:48:12.132989  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.154532  893814 fix.go:112] recreateIfNeeded on ha-409851-m04: state=Stopped err=<nil>
	W1120 21:48:12.154570  893814 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:48:12.157790  893814 out.go:252] * Restarting existing docker container for "ha-409851-m04" ...
	I1120 21:48:12.157940  893814 cli_runner.go:164] Run: docker start ha-409851-m04
	I1120 21:48:12.427421  893814 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:48:12.449849  893814 kic.go:430] container "ha-409851-m04" state is running.
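Editor's note: the restart above is driven through the docker CLI (docker container inspect --format={{.State.Status}} followed by docker start when the container is stopped). A minimal sketch of that check-then-start flow via os/exec follows; the container name is taken from the log, everything else is illustrative rather than minikube's cli_runner implementation.

// restartifstopped sketches the check-then-start flow in the log above:
// read the container state with `docker container inspect` and run
// `docker start` only when it is not already running. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func restartIfStopped(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("inspect %s: %w", name, err)
	}
	state := strings.TrimSpace(string(out))
	if state == "running" {
		return nil // nothing to do
	}
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		return fmt.Errorf("start %s: %w", name, err)
	}
	return nil
}

func main() {
	if err := restartIfStopped("ha-409851-m04"); err != nil {
		fmt.Println(err)
	}
}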
	I1120 21:48:12.450339  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:12.476563  893814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/config.json ...
	I1120 21:48:12.476804  893814 machine.go:94] provisionDockerMachine start ...
	I1120 21:48:12.476866  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:12.503516  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:12.503831  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:12.503851  893814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:48:12.506827  893814 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:48:15.671577  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.671648  893814 ubuntu.go:182] provisioning hostname "ha-409851-m04"
	I1120 21:48:15.671727  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.694098  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.694405  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.694422  893814 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-409851-m04 && echo "ha-409851-m04" | sudo tee /etc/hostname
	I1120 21:48:15.858000  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-409851-m04
	
	I1120 21:48:15.858085  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:15.876926  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:15.877279  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:15.877303  893814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-409851-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-409851-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-409851-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:48:16.029401  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
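Editor's note: the hostname fix above is applied by running a small shell script over SSH against the node's forwarded port (127.0.0.1:33947 here), authenticating with the machine's private key. A hedged sketch of running one remote command that way with golang.org/x/crypto/ssh follows; the address, user, and key path are copied from the log, and the simplified error handling and skipped host-key check are assumptions for brevity, not minikube's sshutil behaviour.

// sshrun is a minimal sketch of running a remote command over SSH the way
// the provisioning steps above do (key-based auth against a forwarded port).
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Host key checking is skipped purely to keep the sketch short.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:33947", "docker",
		"/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa",
		"hostname")
	fmt.Println(out, err)
}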
	I1120 21:48:16.029428  893814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 21:48:16.029445  893814 ubuntu.go:190] setting up certificates
	I1120 21:48:16.029456  893814 provision.go:84] configureAuth start
	I1120 21:48:16.029533  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:16.048090  893814 provision.go:143] copyHostCerts
	I1120 21:48:16.048141  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048175  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 21:48:16.048187  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 21:48:16.048261  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 21:48:16.048383  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048401  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 21:48:16.048406  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 21:48:16.048432  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 21:48:16.048499  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048515  893814 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 21:48:16.048520  893814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 21:48:16.048545  893814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 21:48:16.048600  893814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.ha-409851-m04 san=[127.0.0.1 192.168.49.5 ha-409851-m04 localhost minikube]
	I1120 21:48:16.265083  893814 provision.go:177] copyRemoteCerts
	I1120 21:48:16.265160  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:48:16.265209  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.290442  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.396414  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:48:16.396484  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:48:16.418369  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:48:16.418439  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:48:16.437910  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:48:16.437992  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:48:16.456712  893814 provision.go:87] duration metric: took 427.242108ms to configureAuth
	I1120 21:48:16.456739  893814 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:48:16.457027  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:16.457179  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.476563  893814 main.go:143] libmachine: Using SSH client type: native
	I1120 21:48:16.477370  893814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1120 21:48:16.477578  893814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:48:16.833311  893814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:48:16.833334  893814 machine.go:97] duration metric: took 4.356521136s to provisionDockerMachine
	I1120 21:48:16.833346  893814 start.go:293] postStartSetup for "ha-409851-m04" (driver="docker")
	I1120 21:48:16.833356  893814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:48:16.833422  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:48:16.833480  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:16.855465  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:16.967534  893814 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:48:16.970900  893814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:48:16.970931  893814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:48:16.970942  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 21:48:16.971037  893814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 21:48:16.971121  893814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 21:48:16.971132  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /etc/ssl/certs/8368522.pem
	I1120 21:48:16.971248  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:48:16.980647  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:17.001479  893814 start.go:296] duration metric: took 168.114968ms for postStartSetup
	I1120 21:48:17.001571  893814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:48:17.001627  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.030384  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.140073  893814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:48:17.144863  893814 fix.go:56] duration metric: took 5.012127885s for fixHost
	I1120 21:48:17.144890  893814 start.go:83] releasing machines lock for "ha-409851-m04", held for 5.012183123s
	I1120 21:48:17.144964  893814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:48:17.172547  893814 out.go:179] * Found network options:
	I1120 21:48:17.175556  893814 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 21:48:17.178404  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178431  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178457  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 21:48:17.178669  893814 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 21:48:17.178737  893814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:48:17.178785  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.178630  893814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:48:17.178897  893814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:48:17.197245  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.203292  893814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:48:17.340122  893814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:48:17.405989  893814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:48:17.406071  893814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:48:17.414439  893814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:48:17.414465  893814 start.go:496] detecting cgroup driver to use...
	I1120 21:48:17.414498  893814 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:48:17.414553  893814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:48:17.430500  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:48:17.443843  893814 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:48:17.443906  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:48:17.460231  893814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:48:17.475600  893814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:48:17.602698  893814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:48:17.729597  893814 docker.go:234] disabling docker service ...
	I1120 21:48:17.729663  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:48:17.746588  893814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:48:17.760617  893814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:48:17.897973  893814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:48:18.030520  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:48:18.046315  893814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:48:18.066053  893814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:48:18.066129  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.077050  893814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:48:18.077175  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.090079  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.100829  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.110671  893814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:48:18.121922  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.135640  893814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.145103  893814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:48:18.155094  893814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:48:18.164129  893814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:48:18.171842  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:18.297944  893814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:48:18.470275  893814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:48:18.470358  893814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:48:18.479108  893814 start.go:564] Will wait 60s for crictl version
	I1120 21:48:18.479175  893814 ssh_runner.go:195] Run: which crictl
	I1120 21:48:18.483098  893814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:48:18.507764  893814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:48:18.507924  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.539112  893814 ssh_runner.go:195] Run: crio --version
	I1120 21:48:18.574786  893814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:48:18.577738  893814 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 21:48:18.580677  893814 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 21:48:18.583863  893814 cli_runner.go:164] Run: docker network inspect ha-409851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:48:18.602824  893814 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 21:48:18.606736  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:18.616366  893814 mustload.go:66] Loading cluster: ha-409851
	I1120 21:48:18.616605  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:18.616854  893814 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:48:18.635714  893814 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:48:18.635989  893814 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851 for IP: 192.168.49.5
	I1120 21:48:18.636005  893814 certs.go:195] generating shared ca certs ...
	I1120 21:48:18.636021  893814 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:48:18.636154  893814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 21:48:18.636201  893814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 21:48:18.636216  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 21:48:18.636245  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 21:48:18.636262  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 21:48:18.636274  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 21:48:18.636332  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 21:48:18.636367  893814 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 21:48:18.636380  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:48:18.636406  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:48:18.636432  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:48:18.636458  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 21:48:18.636503  893814 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 21:48:18.636535  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.636553  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.636564  893814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem -> /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.636585  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:48:18.657556  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:48:18.675080  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:48:18.694571  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 21:48:18.716226  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 21:48:18.739895  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:48:18.768046  893814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 21:48:18.787993  893814 ssh_runner.go:195] Run: openssl version
	I1120 21:48:18.794810  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.802541  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 21:48:18.810498  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814300  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.814368  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 21:48:18.856630  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:48:18.864919  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.872737  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:48:18.880590  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884848  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.884916  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:48:18.931413  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:48:18.939099  893814 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.946583  893814 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 21:48:18.954298  893814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960087  893814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 21:48:18.960197  893814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 21:48:19.002435  893814 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:48:19.012167  893814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:48:19.016432  893814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:48:19.016483  893814 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 21:48:19.016573  893814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-409851-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-409851 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:48:19.016654  893814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:48:19.026160  893814 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:48:19.026286  893814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 21:48:19.036127  893814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 21:48:19.049708  893814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:48:19.064947  893814 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 21:48:19.068918  893814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:48:19.079069  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.199728  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.213792  893814 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 21:48:19.214167  893814 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:48:19.219019  893814 out.go:179] * Verifying Kubernetes components...
	I1120 21:48:19.221920  893814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:48:19.355490  893814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:48:19.371278  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 21:48:19.371349  893814 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 21:48:19.371586  893814 node_ready.go:35] waiting up to 6m0s for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374629  893814 node_ready.go:49] node "ha-409851-m04" is "Ready"
	I1120 21:48:19.374657  893814 node_ready.go:38] duration metric: took 3.053659ms for node "ha-409851-m04" to be "Ready" ...
	I1120 21:48:19.374671  893814 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:48:19.374745  893814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:48:19.389451  893814 system_svc.go:56] duration metric: took 14.77112ms WaitForService to wait for kubelet
	I1120 21:48:19.389479  893814 kubeadm.go:587] duration metric: took 175.627603ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:48:19.389497  893814 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:48:19.393426  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393518  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393535  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393542  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393547  893814 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:48:19.393552  893814 node_conditions.go:123] node cpu capacity is 2
	I1120 21:48:19.393557  893814 node_conditions.go:105] duration metric: took 4.054434ms to run NodePressure ...
	I1120 21:48:19.393575  893814 start.go:242] waiting for startup goroutines ...
	I1120 21:48:19.393603  893814 start.go:256] writing updated cluster config ...
	I1120 21:48:19.393953  893814 ssh_runner.go:195] Run: rm -f paused
	I1120 21:48:19.397987  893814 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:48:19.398502  893814 kapi.go:59] client config for ha-409851: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/ha-409851/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:48:19.416487  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:48:21.424537  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:23.929996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:26.423923  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:28.424118  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:30.923501  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:33.423121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:35.423365  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:37.424719  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:39.923727  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:41.965360  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:44.435238  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:46.923403  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:48.923993  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:51.426397  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:53.924562  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:56.423976  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:48:58.431436  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:00.922387  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:02.923880  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:04.924121  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:07.423527  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:09.424675  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:11.922381  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:13.922686  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:15.923609  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:17.924006  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:20.423097  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	W1120 21:49:22.423996  893814 pod_ready.go:104] pod "coredns-66bc5c9577-pjk6c" is not "Ready", error: <nil>
	I1120 21:49:23.424030  893814 pod_ready.go:94] pod "coredns-66bc5c9577-pjk6c" is "Ready"
	I1120 21:49:23.424063  893814 pod_ready.go:86] duration metric: took 1m4.007542805s for pod "coredns-66bc5c9577-pjk6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.424073  893814 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.430119  893814 pod_ready.go:94] pod "coredns-66bc5c9577-vfsp6" is "Ready"
	I1120 21:49:23.430146  893814 pod_ready.go:86] duration metric: took 6.066348ms for pod "coredns-66bc5c9577-vfsp6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.434497  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442021  893814 pod_ready.go:94] pod "etcd-ha-409851" is "Ready"
	I1120 21:49:23.442059  893814 pod_ready.go:86] duration metric: took 7.532597ms for pod "etcd-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.442070  893814 pod_ready.go:83] waiting for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.453471  893814 pod_ready.go:94] pod "etcd-ha-409851-m02" is "Ready"
	I1120 21:49:23.453510  893814 pod_ready.go:86] duration metric: took 11.432528ms for pod "etcd-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.460522  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.617970  893814 request.go:683] "Waited before sending request" delay="157.293328ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851"
	I1120 21:49:23.817544  893814 request.go:683] "Waited before sending request" delay="194.243021ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:23.820786  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851" is "Ready"
	I1120 21:49:23.820814  893814 pod_ready.go:86] duration metric: took 360.266065ms for pod "kube-apiserver-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:23.820823  893814 pod_ready.go:83] waiting for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.018232  893814 request.go:683] "Waited before sending request" delay="197.334029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-409851-m02"
	I1120 21:49:24.217808  893814 request.go:683] "Waited before sending request" delay="195.31208ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:24.220981  893814 pod_ready.go:94] pod "kube-apiserver-ha-409851-m02" is "Ready"
	I1120 21:49:24.221009  893814 pod_ready.go:86] duration metric: took 400.178739ms for pod "kube-apiserver-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.418386  893814 request.go:683] "Waited before sending request" delay="197.22929ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 21:49:24.423065  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.617542  893814 request.go:683] "Waited before sending request" delay="194.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851"
	I1120 21:49:24.818451  893814 request.go:683] "Waited before sending request" delay="195.369435ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:24.821748  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851" is "Ready"
	I1120 21:49:24.821777  893814 pod_ready.go:86] duration metric: took 398.632324ms for pod "kube-controller-manager-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:24.821787  893814 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.018152  893814 request.go:683] "Waited before sending request" delay="196.257511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-409851-m02"
	I1120 21:49:25.217440  893814 request.go:683] "Waited before sending request" delay="193.274434ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:25.221099  893814 pod_ready.go:94] pod "kube-controller-manager-ha-409851-m02" is "Ready"
	I1120 21:49:25.221184  893814 pod_ready.go:86] duration metric: took 399.388707ms for pod "kube-controller-manager-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.417592  893814 request.go:683] "Waited before sending request" delay="196.294697ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 21:49:25.421901  893814 pod_ready.go:83] waiting for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.618261  893814 request.go:683] "Waited before sending request" delay="196.198417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qqxh"
	I1120 21:49:25.818227  893814 request.go:683] "Waited before sending request" delay="195.266861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:25.822845  893814 pod_ready.go:94] pod "kube-proxy-4qqxh" is "Ready"
	I1120 21:49:25.822876  893814 pod_ready.go:86] duration metric: took 400.891774ms for pod "kube-proxy-4qqxh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:25.822887  893814 pod_ready.go:83] waiting for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.018147  893814 request.go:683] "Waited before sending request" delay="195.181839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz7vt"
	I1120 21:49:26.218218  893814 request.go:683] "Waited before sending request" delay="194.325204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:26.221718  893814 pod_ready.go:94] pod "kube-proxy-pz7vt" is "Ready"
	I1120 21:49:26.221756  893814 pod_ready.go:86] duration metric: took 398.861103ms for pod "kube-proxy-pz7vt" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.221767  893814 pod_ready.go:83] waiting for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.418209  893814 request.go:683] "Waited before sending request" delay="196.333755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnhl6"
	I1120 21:49:26.618151  893814 request.go:683] "Waited before sending request" delay="196.349344ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m04"
	I1120 21:49:26.623181  893814 pod_ready.go:94] pod "kube-proxy-xnhl6" is "Ready"
	I1120 21:49:26.623210  893814 pod_ready.go:86] duration metric: took 401.436889ms for pod "kube-proxy-xnhl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:26.817459  893814 request.go:683] "Waited before sending request" delay="194.131676ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1120 21:49:26.821013  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.018492  893814 request.go:683] "Waited before sending request" delay="197.322386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851"
	I1120 21:49:27.217513  893814 request.go:683] "Waited before sending request" delay="190.181719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851"
	I1120 21:49:27.226443  893814 pod_ready.go:94] pod "kube-scheduler-ha-409851" is "Ready"
	I1120 21:49:27.226520  893814 pod_ready.go:86] duration metric: took 405.47524ms for pod "kube-scheduler-ha-409851" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.226546  893814 pod_ready.go:83] waiting for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:49:27.417983  893814 request.go:683] "Waited before sending request" delay="191.325659ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:27.618140  893814 request.go:683] "Waited before sending request" delay="196.249535ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:27.817620  893814 request.go:683] "Waited before sending request" delay="90.393989ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-409851-m02"
	I1120 21:49:28.018196  893814 request.go:683] "Waited before sending request" delay="197.189707ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.417767  893814 request.go:683] "Waited before sending request" delay="186.33455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	I1120 21:49:28.817959  893814 request.go:683] "Waited before sending request" delay="87.275796ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-409851-m02"
	W1120 21:49:29.233343  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:31.233779  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:33.234413  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:35.733284  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:38.233049  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:40.233361  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:42.235442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:44.734815  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:47.232729  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:49.233113  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:51.234068  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:53.732962  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:56.233319  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:49:58.734472  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:01.234009  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:03.234832  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:05.733469  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:08.234179  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:10.735546  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:12.735872  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:14.736374  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:16.740445  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:19.233806  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:21.733741  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:23.735456  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:26.232453  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:28.233317  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:30.735024  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:32.735868  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:35.234232  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:37.734207  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:40.234052  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:42.240134  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:44.733059  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:46.733334  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:48.738389  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:51.233067  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:53.234660  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:55.733852  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:50:57.734484  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:00.249903  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:02.732606  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:04.736105  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:07.233350  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:09.733211  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:11.733392  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:14.234536  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:16.732259  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:18.735892  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:20.735996  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:23.234680  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:25.733375  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:27.733961  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:29.735523  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:32.236382  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:34.733336  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:36.733744  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:38.734442  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:40.734588  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:42.734796  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:44.735137  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:46.736111  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:49.233632  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:51.733070  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:53.734822  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:56.233800  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:51:58.234379  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:00.264529  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:02.742360  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:05.233819  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:07.733077  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:09.734867  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:12.233625  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:14.733387  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:16.734342  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	W1120 21:52:18.734797  893814 pod_ready.go:104] pod "kube-scheduler-ha-409851-m02" is not "Ready", error: <nil>
	I1120 21:52:19.398473  893814 pod_ready.go:86] duration metric: took 2m52.171896252s for pod "kube-scheduler-ha-409851-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:52:19.398508  893814 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-scheduler" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 21:52:19.398524  893814 pod_ready.go:40] duration metric: took 4m0.000499103s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:52:19.401528  893814 out.go:203] 
	W1120 21:52:19.404511  893814 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 21:52:19.407414  893814 out.go:203] 
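
Editor's note: the retry loop above is minikube's pod_ready wait. It polls each matching kube-system pod until its Ready condition is True (or the pod is gone), and gives up when the extra 4m0s budget expires, which is what happens here for kube-scheduler-ha-409851-m02 and produces the GUEST_START exit. Below is a minimal client-go sketch of that condition check, for readers reproducing the wait by hand. It is illustrative only, not minikube's actual helper; the kubeconfig path and pod name are assumptions.

// readycheck.go: report whether a named pod has the Ready condition set to True.
// Illustrative sketch only; kubeconfig path and pod name are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady walks the pod's status conditions and returns true only when
// the Ready condition is present and True, the same test pod_ready polls for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; point it at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-409851-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
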
	
	
	==> CRI-O <==
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811470727Z" level=info msg="Running pod sandbox: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.811536598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.815250925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.818484951Z" level=info msg="Ran pod sandbox b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb with infra container: kube-system/kindnet-7hmbf/POD" id=28bea4ad-45c7-4ae7-92e7-809ca92ae1f4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.820409438Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=de13f0e7-3c4a-42d5-9c8d-3a3bc426d7fd name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.826704318Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2666544-b5e7-4f59-a2f3-144082db7373 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.828605429Z" level=info msg="Creating container: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.829288957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.834469699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.835169227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.85382609Z" level=info msg="Created container bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454: kube-system/kindnet-7hmbf/kindnet-cni" id=fa91b507-57b0-4587-9812-2928e0280a62 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.854825659Z" level=info msg="Starting container: bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454" id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:48:45 ha-409851 crio[668]: time="2025-11-20T21:48:45.859192598Z" level=info msg="Started container" PID=1405 containerID=bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454 description=kube-system/kindnet-7hmbf/kindnet-cni id=c468e3c9-d4e5-493c-bfd8-7edc351197ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2d79927049c127d9e5f12aca58d594c8f613b055eb5c07f7c0ebe2467920bdb
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.206856782Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210460298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.21049604Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.210517833Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.213977617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214129201Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.214171162Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217329445Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217362923Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.217385791Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220578314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:48:56 ha-409851 crio[668]: time="2025-11-20T21:48:56.220610922Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bad91fe692656       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               2                   b2d79927049c1       kindnet-7hmbf                       kube-system
	45150399abc60       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   2                   86a0aabe892ba       busybox-7b57f96db7-mgvhj            default
	282f28167fcd8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       3                   cf9b9178a22be       storage-provisioner                 kube-system
	283abd913ff4d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                2                   51827a0562eaa       kube-proxy-4qqxh                    kube-system
	3064e4d2cac3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   2                   f1efa47298912       coredns-66bc5c9577-pjk6c            kube-system
	474e5b9d1f070       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   2                   fb899ea594eab       coredns-66bc5c9577-vfsp6            kube-system
	5ccb03706c0f4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   7                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	53d8cbac386fc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   6                   5ac2d22e0c15f       kube-controller-manager-ha-409851   kube-system
	21eb6c12eb9d6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            4                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
	e758e4601a79a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  2                   276d004d64a0f       kube-vip-ha-409851                  kube-system
	bf7fd293f188a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   251d917d7ecb8       kube-scheduler-ha-409851            kube-system
	29879cb03dd0a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      2                   44edbb77d8632       etcd-ha-409851                      kube-system
	d2a9e01261d92       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Exited              kube-apiserver            3                   11a0f49f5bc02       kube-apiserver-ha-409851            kube-system
	
	
	==> coredns [3064e4d2cac3e067a0a0ba1353e3b89a5da11e7e5a320f683346febeadfbb73a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40971 - 38824 "HINFO IN 3995400066811168115.5738602718581230250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004050865s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [474e5b9d1f07007a252c22fb0e9172e8fd3235037aecc813a1d66128aa8e0d26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46282 - 18255 "HINFO IN 2304188649282025477.3571330681415947141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021110391s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-409851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_32_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:53:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-409851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                1f114e92-c1bf-4c10-9121-0a6c185877b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-mgvhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-pjk6c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 coredns-66bc5c9577-vfsp6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-ha-409851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-7hmbf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-409851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-409851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-4qqxh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-409851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-409851                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21m                    kube-proxy       
	  Normal   Starting                 5m5s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     21m (x8 over 21m)      kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)      kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)      kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     21m                    kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21m                    kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                    kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           21m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeReady                20m                    kubelet          Node ha-409851 status is now: NodeReady
	  Normal   RegisteredNode           19m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)      kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   NodeHasSufficientMemory  7m29s (x8 over 7m29s)  kubelet          Node ha-409851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m29s (x8 over 7m29s)  kubelet          Node ha-409851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m29s (x8 over 7m29s)  kubelet          Node ha-409851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-409851 event: Registered Node ha-409851 in Controller
	
	
	Name:               ha-409851-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:53:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:33:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-409851-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3904cc8f-d8d1-4880-8dca-3fb5e1048dff
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqh2f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-409851-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-56lr8                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-ha-409851-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-409851-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-pz7vt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-409851-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-409851-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 20m                    kube-proxy       
	  Normal   Starting                 5m6s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   RegisteredNode           20m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           19m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   Starting                 7m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m26s (x8 over 7m26s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m26s (x8 over 7m26s)  kubelet          Node ha-409851-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m26s (x8 over 7m26s)  kubelet          Node ha-409851-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        6m26s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-409851-m02 event: Registered Node ha-409851-m02 in Controller
	
	
	Name:               ha-409851-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_35_59_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:53:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:53:42 +0000   Thu, 20 Nov 2025 21:41:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-409851-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                2c1b4976-2a70-4f78-8646-ed9804d613b4
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-snllw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kindnet-2d5r9               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-proxy-xnhl6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 5m16s                  kube-proxy       
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    17m (x3 over 17m)      kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x3 over 17m)      kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m (x3 over 17m)      kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-409851-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   NodeNotReady             13m                    node-controller  Node ha-409851-m04 status is now: NodeNotReady
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Warning  CgroupV1                 5m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m32s (x8 over 5m35s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m32s (x8 over 5m35s)  kubelet          Node ha-409851-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m32s (x8 over 5m35s)  kubelet          Node ha-409851-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-409851-m04 event: Registered Node ha-409851-m04 in Controller
	
	
	Name:               ha-409851-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-409851-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-409851
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T21_53_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:52:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-409851-m05
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:53:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:52:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:52:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:52:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:53:44 +0000   Thu, 20 Nov 2025 21:53:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-409851-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                e40a3d90-8ac2-411c-a847-61701d9a9f0a
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-409851-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-9gnd7                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-ha-409851-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-409851-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-jdmv6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-ha-409851-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-vip-ha-409851-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        45s   kube-proxy       
	  Normal  RegisteredNode  47s   node-controller  Node ha-409851-m05 event: Registered Node ha-409851-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-409851-m05 event: Registered Node ha-409851-m05 in Controller
	  Normal  RegisteredNode  44s   node-controller  Node ha-409851-m05 event: Registered Node ha-409851-m05 in Controller
	
	
	==> dmesg <==
	[  +2.035111] overlayfs: idmapped layers are currently not supported
	[Nov20 19:54] overlayfs: idmapped layers are currently not supported
	[Nov20 19:55] overlayfs: idmapped layers are currently not supported
	[Nov20 19:56] overlayfs: idmapped layers are currently not supported
	[Nov20 19:57] overlayfs: idmapped layers are currently not supported
	[Nov20 19:58] overlayfs: idmapped layers are currently not supported
	[Nov20 19:59] overlayfs: idmapped layers are currently not supported
	[Nov20 20:04] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:08] kauditd_printk_skb: 8 callbacks suppressed
	[Nov20 21:11] overlayfs: idmapped layers are currently not supported
	[Nov20 21:17] overlayfs: idmapped layers are currently not supported
	[Nov20 21:18] overlayfs: idmapped layers are currently not supported
	[Nov20 21:32] overlayfs: idmapped layers are currently not supported
	[Nov20 21:33] overlayfs: idmapped layers are currently not supported
	[Nov20 21:34] overlayfs: idmapped layers are currently not supported
	[Nov20 21:36] overlayfs: idmapped layers are currently not supported
	[Nov20 21:37] overlayfs: idmapped layers are currently not supported
	[Nov20 21:38] overlayfs: idmapped layers are currently not supported
	[  +3.034217] overlayfs: idmapped layers are currently not supported
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	[Nov20 21:46] overlayfs: idmapped layers are currently not supported
	[  +2.922279] overlayfs: idmapped layers are currently not supported
	[Nov20 21:48] overlayfs: idmapped layers are currently not supported
	[Nov20 21:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29879cb03dd0a43326e4e6e94a9bec4cf49f8356cb3cf208c0a562ed783bb2de] <==
	{"level":"error","ts":"2025-11-20T21:52:49.847814Z","caller":"etcdserver/server.go:1585","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1585\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1526\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1498\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1450\ngo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*peerMemberPromoteHandler).ServeHTTP\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/peer.go:140\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/ser
ver.go:2092"}
	{"level":"warn","ts":"2025-11-20T21:52:49.847965Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"aa4640d43bdb8334","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-11-20T21:52:49.897897Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":8216576,"size":"8.2 MB"}
	{"level":"error","ts":"2025-11-20T21:52:50.345041Z","caller":"etcdserver/server.go:1585","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1585\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1526\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1498\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1450\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.4/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserv
er/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.4/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoogle.golang.org/grpc@v1.
71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-11-20T21:52:50.642384Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":5359,"remote-peer-id":"aa4640d43bdb8334","bytes":8225726,"size":"8.2 MB"}
	{"level":"info","ts":"2025-11-20T21:52:50.849029Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(9917626278389854547 12269565515098981172 12593026477526642892)"}
	{"level":"info","ts":"2025-11-20T21:52:50.849227Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:50.849284Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"aa4640d43bdb8334"}
	{"level":"warn","ts":"2025-11-20T21:52:50.873545Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:52:50.873820Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:52:51.036266Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"aa4640d43bdb8334","error":"failed to write aa4640d43bdb8334 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:44814: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-20T21:52:51.036368Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"warn","ts":"2025-11-20T21:52:51.124388Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:51.199952Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:51.299086Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"aa4640d43bdb8334","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-20T21:52:51.299156Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:51.300842Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:51.408140Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"aa4640d43bdb8334","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-20T21:52:51.408215Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:51.589189Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"aa4640d43bdb8334"}
	{"level":"info","ts":"2025-11-20T21:52:59.907266Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-20T21:53:06.027369Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-20T21:53:20.643153Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"aa4640d43bdb8334","bytes":8225726,"size":"8.2 MB","took":"31.132791685s"}
	{"level":"warn","ts":"2025-11-20T21:53:48.513130Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.409736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:371188"}
	{"level":"info","ts":"2025-11-20T21:53:48.513249Z","caller":"traceutil/trace.go:172","msg":"trace[548153904] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:4793; }","duration":"126.508141ms","start":"2025-11-20T21:53:48.386688Z","end":"2025-11-20T21:53:48.513196Z","steps":["trace[548153904] 'range keys from bolt db'  (duration: 125.209742ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:53:48 up  4:35,  0 user,  load average: 1.63, 1.26, 1.35
	Linux ha-409851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bad91fe692656c0f3819f594818f4a30e845a6233f1cbcdcb9ece16be02c1454] <==
	I1120 21:53:16.206934       1 main.go:324] Node ha-409851-m05 has CIDR [10.244.2.0/24] 
	I1120 21:53:26.207337       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:53:26.207455       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:53:26.207660       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:53:26.207677       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:53:26.207738       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1120 21:53:26.207751       1 main.go:324] Node ha-409851-m05 has CIDR [10.244.2.0/24] 
	I1120 21:53:26.207805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:53:26.207818       1 main.go:301] handling current node
	I1120 21:53:36.205440       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:53:36.205474       1 main.go:301] handling current node
	I1120 21:53:36.205489       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:53:36.205496       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:53:36.205696       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:53:36.205711       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:53:36.205835       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1120 21:53:36.205845       1 main.go:324] Node ha-409851-m05 has CIDR [10.244.2.0/24] 
	I1120 21:53:46.205362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 21:53:46.205393       1 main.go:301] handling current node
	I1120 21:53:46.205426       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 21:53:46.205434       1 main.go:324] Node ha-409851-m02 has CIDR [10.244.1.0/24] 
	I1120 21:53:46.205608       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 21:53:46.205617       1 main.go:324] Node ha-409851-m04 has CIDR [10.244.3.0/24] 
	I1120 21:53:46.213225       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1120 21:53:46.213263       1 main.go:324] Node ha-409851-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [21eb6c12eb9d6c645ff79035e852942fc36d120d38e6634372d84d1fff4b1c3a] <==
	I1120 21:48:05.164517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:48:05.251597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:48:05.267215       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:48:05.273069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:48:05.273181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:48:05.301644       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:48:05.303022       1 policy_source.go:240] refreshing policies
	I1120 21:48:05.343504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:48:05.343769       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:48:05.344234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:48:05.350900       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:48:05.361480       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:48:05.362670       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:48:05.370720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:48:05.362690       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:48:11.243570       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:48:11.243643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:48:11.543897       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	W1120 21:48:11.986847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1120 21:48:11.988628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:48:11.996638       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:48:31.545364       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:48:44.311228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:48:46.301552       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:49:23.280882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [d2a9e01261d927422239ac6d8aae4c4810c85777bd6fc37ddc5126a51deff4dd] <==
	{"level":"warn","ts":"2025-11-20T21:47:25.675429Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675510Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cd61e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675620Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675648Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675671Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000797860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675698Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400224d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675596Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40007970e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675739Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019532c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675766Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b6d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675801Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675829Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400276c780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675854Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675804Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013d83c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675908Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.675911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b40960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-20T21:47:25.827032Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1120 21:47:25.827154       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1120 21:47:25.827227       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828931       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.828993       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:47:25.830257       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.94329ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-11-20T21:47:26.843128Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400212da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	F1120 21:47:27.272727       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43] <==
	I1120 21:47:44.271187       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:47:45.887863       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 21:47:45.887899       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:47:45.889312       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 21:47:45.889482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 21:47:45.889741       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 21:47:45.889803       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 21:47:55.905939       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [5ccb03706c0f435e1a09ff9e7ebbe19aee8f89c6e7467182aa27e3874e6c323d] <==
	I1120 21:48:44.201695       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:48:44.201862       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:48:44.201975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m04"
	I1120 21:48:44.202045       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851"
	I1120 21:48:44.202137       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m02"
	I1120 21:48:44.202200       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:48:44.213792       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:48:44.217890       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:48:44.217972       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:48:44.218002       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:48:44.234704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:49:23.353198       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.353878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	E1120 21:49:23.376944       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1120 21:49:23.392884       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9v6gm\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 21:49:23.393588       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"21992042-f6b2-485a-bd9b-decc3a3d6f7e", APIVersion:"v1", ResourceVersion:"294", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9v6gm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9v6gm": the object has been modified; please apply your changes to the latest version and try again
	E1120 21:52:58.909883       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4z8n8 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4z8n8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1120 21:52:58.925505       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4z8n8 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4z8n8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1120 21:52:59.564594       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-409851-m05\" does not exist"
	I1120 21:52:59.575407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-409851-m04"
	I1120 21:52:59.585136       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-409851-m05" podCIDRs=["10.244.2.0/24"]
	E1120 21:52:59.891841       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"dad7ca2c-c0d5-4a01-8524-a8a5798417dd\", ResourceVersion:\"3912\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 20, 21, 32, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\
\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001774dc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name
:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400022f8f0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolu
meClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400022fa70), EmptyDir:(*v1.EmptyDirVolumeSourc
e)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwo
rxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400022fad0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x4001cfc420)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVa
rSource)(0x4001cfc450)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.Volum
eMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x4002073080), Stdin:false, StdinOnce:false
, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002231b10), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40018418c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(
nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40022da340)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4002231b4c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandl
edError"
	I1120 21:53:04.265574       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-409851-m05"
	I1120 21:53:44.211986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-409851-m04"
	
	
	==> kube-proxy [283abd913ff4d5c1081b76097b71e66eb996220513fadc607f8f68cd50071785] <==
	I1120 21:48:42.954042       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:48:43.040713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:48:43.141728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:48:43.141763       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 21:48:43.141860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:48:43.160133       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:48:43.160188       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:48:43.163678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:48:43.163975       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:48:43.164011       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:48:43.168077       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:48:43.168182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:48:43.168489       1 config.go:200] "Starting service config controller"
	I1120 21:48:43.168532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:48:43.169345       1 config.go:309] "Starting node config controller"
	I1120 21:48:43.169359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:48:43.169367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:48:43.172283       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:48:43.172357       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:48:43.268742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:48:43.268898       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:48:43.272772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf7fd293f188a4c3116512ca8739e3ae57f6b6ac6e8e5e7a7e493804caba0ede] <==
	E1120 21:47:59.593992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:48:00.869852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:48:01.061027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:48:01.453651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:48:03.292850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:48:03.733908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:48:03.942583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:48:04.337599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:48:05.178246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:52:59.711820       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jdmv6\": pod kube-proxy-jdmv6 is already assigned to node \"ha-409851-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jdmv6" node="ha-409851-m05"
	E1120 21:52:59.711886       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 55fb3c0d-e8ef-4b6a-8655-627158cbae52(kube-system/kube-proxy-jdmv6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-jdmv6"
	E1120 21:52:59.711908       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jdmv6\": pod kube-proxy-jdmv6 is already assigned to node \"ha-409851-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-jdmv6"
	I1120 21:52:59.714177       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jdmv6" node="ha-409851-m05"
	E1120 21:52:59.716401       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9gnd7\": pod kindnet-9gnd7 is already assigned to node \"ha-409851-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-9gnd7" node="ha-409851-m05"
	E1120 21:52:59.716446       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b8e6d4af-3bc1-4d68-b5bc-d9e99bd2efa1(kube-system/kindnet-9gnd7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-9gnd7"
	E1120 21:52:59.716465       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9gnd7\": pod kindnet-9gnd7 is already assigned to node \"ha-409851-m05\"" logger="UnhandledError" pod="kube-system/kindnet-9gnd7"
	I1120 21:52:59.717708       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9gnd7" node="ha-409851-m05"
	E1120 21:52:59.749134       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pmn7l\": pod kindnet-pmn7l is already assigned to node \"ha-409851-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-pmn7l" node="ha-409851-m05"
	E1120 21:52:59.749185       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d74730e8-4725-4625-8985-f23fe3db2afb(kube-system/kindnet-pmn7l) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-pmn7l"
	E1120 21:52:59.749206       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pmn7l\": pod kindnet-pmn7l is already assigned to node \"ha-409851-m05\"" logger="UnhandledError" pod="kube-system/kindnet-pmn7l"
	I1120 21:52:59.750444       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pmn7l" node="ha-409851-m05"
	E1120 21:52:59.944093       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bt945\": pod kube-proxy-bt945 is already assigned to node \"ha-409851-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bt945" node="ha-409851-m05"
	E1120 21:52:59.944173       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bt945\": pod kube-proxy-bt945 is already assigned to node \"ha-409851-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-bt945"
	E1120 21:52:59.949806       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hnmrd\": pod kindnet-hnmrd is already assigned to node \"ha-409851-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-hnmrd" node="ha-409851-m05"
	E1120 21:52:59.949887       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hnmrd\": pod kindnet-hnmrd is already assigned to node \"ha-409851-m05\"" logger="UnhandledError" pod="kube-system/kindnet-hnmrd"
	
	
	==> kubelet <==
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.102858     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7hmbf\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c" pod="kube-system/kindnet-7hmbf"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116790     805 kubelet_node_status.go:124] "Node was previously registered" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116930     805 kubelet_node_status.go:78] "Successfully registered node" node="ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.116963     805 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: I1120 21:48:05.117831     805 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.123111     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"storage-provisioner\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="349c85dc-6341-43ab-b388-8734d72e3040" pod="kube-system/storage-provisioner"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.167806     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-vip-ha-409851\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="6f4588d400318593d47cec16914af85c" pod="kube-system/kube-vip-ha-409851"
	Nov 20 21:48:05 ha-409851 kubelet[805]: E1120 21:48:05.254640     805 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4qqxh\" is forbidden: User \"system:node:ha-409851\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-409851' and this object" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa" pod="kube-system/kube-proxy-4qqxh"
	Nov 20 21:48:14 ha-409851 kubelet[805]: I1120 21:48:14.806712     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:14 ha-409851 kubelet[805]: E1120 21:48:14.806952     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:19 ha-409851 kubelet[805]: E1120 21:48:19.826466     805 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53ae0ada8ee6b87a83c12c535b4145c039ace4d83202156f4f2fa970dd2c3e8a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-409851_69faa2bc5061adf58d981ecf300e1cf6/kube-controller-manager/4.log: no such file or directory
	Nov 20 21:48:26 ha-409851 kubelet[805]: I1120 21:48:26.807409     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:26 ha-409851 kubelet[805]: E1120 21:48:26.807617     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-409851_kube-system(69faa2bc5061adf58d981ecf300e1cf6)\"" pod="kube-system/kube-controller-manager-ha-409851" podUID="69faa2bc5061adf58d981ecf300e1cf6"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.761938     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jvsfx], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-vfsp6" podUID="09c1e0dd-0208-4f69-aac9-670197f4c848"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cg4c6], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/coredns-66bc5c9577-pjk6c" podUID="ad25e130-cf9b-4f5e-b082-23c452bd1c5c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767157     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rjfpv], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kube-proxy-4qqxh" podUID="2f7683fa-0199-444f-bcf4-42666203c1fa"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.767309     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ndpsr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/kindnet-7hmbf" podUID="562945a4-84ec-46c8-b77e-abdd9d577c9c"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768337     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jlbcp], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/storage-provisioner" podUID="349c85dc-6341-43ab-b388-8734d72e3040"
	Nov 20 21:48:30 ha-409851 kubelet[805]: E1120 21:48:30.768345     805 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-t5g2b], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="default/busybox-7b57f96db7-mgvhj" podUID="79106a87-339a-4b68-ad4e-12ef6b0b03ca"
	Nov 20 21:48:34 ha-409851 kubelet[805]: I1120 21:48:34.138084     805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:48:39 ha-409851 kubelet[805]: I1120 21:48:39.807902     805 scope.go:117] "RemoveContainer" containerID="53d8cbac386fcf080bc46cbd7313d768bc57e98f0f718781af430c7158f25d43"
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.897097     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49 WatchSource:0}: Error finding container fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49: Status 404 returned error can't find the container with id fb899ea594eab05a10c91ed517e7df9f9aa7e6bbc83170c8c51036525a7aed49
	Nov 20 21:48:41 ha-409851 kubelet[805]: W1120 21:48:41.904639     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b WatchSource:0}: Error finding container f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b: Status 404 returned error can't find the container with id f1efa472989129538dbd146ad9e60aeb226bfae7468050404be039e9aa155b4b
	Nov 20 21:48:42 ha-409851 kubelet[805]: W1120 21:48:42.819704     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac WatchSource:0}: Error finding container 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac: Status 404 returned error can't find the container with id 51827a0562eaacba39d1f56d5c992f9b9551bbe843e39c04d20a809fcd02d0ac
	Nov 20 21:48:43 ha-409851 kubelet[805]: W1120 21:48:43.900976     805 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio-86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce WatchSource:0}: Error finding container 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce: Status 404 returned error can't find the container with id 86a0aabe892baf40a6d3f1f4805dc511b99e67d4fc88a0ce7ab2313ee6a4c7ce
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-409851 -n ha-409851
helpers_test.go:269: (dbg) Run:  kubectl --context ha-409851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.20s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.85s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-515527 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-515527 --output=json --user=testUser: exit status 80 (1.851739335s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b43d8657-10e3-464d-a72e-ac8a9733a10a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-515527 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"98f4cf1b-4203-4bca-846e-c822af479b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-20T21:55:25Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"eb0aedfc-bcce-4d92-b16e-b8dbdc0c4a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-515527 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.85s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-515527 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-515527 --output=json --user=testUser: exit status 80 (1.710679418s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"77ceefbc-bcc6-4b25-931e-44ec41c31d22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-515527 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5352f9d2-5b42-4597-83f5-dc0161ee469a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-20T21:55:27Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"24a9e08e-8cc4-4ace-91a6-539732fface9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-515527 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.71s)

                                                
                                    
x
+
TestPause/serial/Pause (7.43s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-236741 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-236741 --alsologtostderr -v=5: exit status 80 (2.508994977s)

                                                
                                                
-- stdout --
	* Pausing node pause-236741 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:18:51.721351 1002550 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:18:51.722652 1002550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:51.722708 1002550 out.go:374] Setting ErrFile to fd 2...
	I1120 22:18:51.722729 1002550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:51.723065 1002550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:18:51.723428 1002550 out.go:368] Setting JSON to false
	I1120 22:18:51.723493 1002550 mustload.go:66] Loading cluster: pause-236741
	I1120 22:18:51.724006 1002550 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:51.724544 1002550 cli_runner.go:164] Run: docker container inspect pause-236741 --format={{.State.Status}}
	I1120 22:18:51.755358 1002550 host.go:66] Checking if "pause-236741" exists ...
	I1120 22:18:51.755666 1002550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:18:51.833709 1002550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:18:51.821689825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:18:51.834585 1002550 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-236741 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:18:51.837540 1002550 out.go:179] * Pausing node pause-236741 ... 
	I1120 22:18:51.841336 1002550 host.go:66] Checking if "pause-236741" exists ...
	I1120 22:18:51.841692 1002550 ssh_runner.go:195] Run: systemctl --version
	I1120 22:18:51.841740 1002550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:51.864815 1002550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:51.969963 1002550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:51.986430 1002550 pause.go:52] kubelet running: true
	I1120 22:18:51.986497 1002550 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:18:52.258092 1002550 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:18:52.258172 1002550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:18:52.343891 1002550 cri.go:89] found id: "281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb"
	I1120 22:18:52.343931 1002550 cri.go:89] found id: "c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499"
	I1120 22:18:52.343947 1002550 cri.go:89] found id: "306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00"
	I1120 22:18:52.343951 1002550 cri.go:89] found id: "a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201"
	I1120 22:18:52.343982 1002550 cri.go:89] found id: "c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48"
	I1120 22:18:52.343992 1002550 cri.go:89] found id: "1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96"
	I1120 22:18:52.343996 1002550 cri.go:89] found id: "8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90"
	I1120 22:18:52.343999 1002550 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:52.344003 1002550 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:52.344010 1002550 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:52.344018 1002550 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:52.344021 1002550 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:52.344025 1002550 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:52.344028 1002550 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:52.344031 1002550 cri.go:89] found id: ""
	I1120 22:18:52.344098 1002550 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:18:52.358296 1002550 retry.go:31] will retry after 136.04331ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:52Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:18:52.494628 1002550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:52.508204 1002550 pause.go:52] kubelet running: false
	I1120 22:18:52.508270 1002550 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:18:52.646147 1002550 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:18:52.646290 1002550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:18:52.716134 1002550 cri.go:89] found id: "281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb"
	I1120 22:18:52.716158 1002550 cri.go:89] found id: "c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499"
	I1120 22:18:52.716164 1002550 cri.go:89] found id: "306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00"
	I1120 22:18:52.716167 1002550 cri.go:89] found id: "a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201"
	I1120 22:18:52.716170 1002550 cri.go:89] found id: "c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48"
	I1120 22:18:52.716174 1002550 cri.go:89] found id: "1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96"
	I1120 22:18:52.716177 1002550 cri.go:89] found id: "8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90"
	I1120 22:18:52.716180 1002550 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:52.716183 1002550 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:52.716214 1002550 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:52.716218 1002550 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:52.716222 1002550 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:52.716225 1002550 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:52.716228 1002550 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:52.716238 1002550 cri.go:89] found id: ""
	I1120 22:18:52.716304 1002550 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:18:52.728003 1002550 retry.go:31] will retry after 504.909925ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:52Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:18:53.233851 1002550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:53.247923 1002550 pause.go:52] kubelet running: false
	I1120 22:18:53.247985 1002550 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:18:53.386351 1002550 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:18:53.386506 1002550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:18:53.454197 1002550 cri.go:89] found id: "281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb"
	I1120 22:18:53.454221 1002550 cri.go:89] found id: "c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499"
	I1120 22:18:53.454227 1002550 cri.go:89] found id: "306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00"
	I1120 22:18:53.454231 1002550 cri.go:89] found id: "a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201"
	I1120 22:18:53.454234 1002550 cri.go:89] found id: "c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48"
	I1120 22:18:53.454238 1002550 cri.go:89] found id: "1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96"
	I1120 22:18:53.454242 1002550 cri.go:89] found id: "8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90"
	I1120 22:18:53.454245 1002550 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:53.454249 1002550 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:53.454255 1002550 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:53.454259 1002550 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:53.454263 1002550 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:53.454270 1002550 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:53.454277 1002550 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:53.454284 1002550 cri.go:89] found id: ""
	I1120 22:18:53.454339 1002550 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:18:53.465705 1002550 retry.go:31] will retry after 407.138838ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:53Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:18:53.873261 1002550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:53.887292 1002550 pause.go:52] kubelet running: false
	I1120 22:18:53.887364 1002550 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:18:54.032039 1002550 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:18:54.032171 1002550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:18:54.102221 1002550 cri.go:89] found id: "281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb"
	I1120 22:18:54.102243 1002550 cri.go:89] found id: "c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499"
	I1120 22:18:54.102249 1002550 cri.go:89] found id: "306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00"
	I1120 22:18:54.102254 1002550 cri.go:89] found id: "a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201"
	I1120 22:18:54.102257 1002550 cri.go:89] found id: "c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48"
	I1120 22:18:54.102261 1002550 cri.go:89] found id: "1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96"
	I1120 22:18:54.102264 1002550 cri.go:89] found id: "8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90"
	I1120 22:18:54.102267 1002550 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:54.102293 1002550 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:54.102307 1002550 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:54.102311 1002550 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:54.102314 1002550 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:54.102318 1002550 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:54.102339 1002550 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:54.102343 1002550 cri.go:89] found id: ""
	I1120 22:18:54.102409 1002550 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:18:54.117048 1002550 out.go:203] 
	W1120 22:18:54.120069 1002550 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:18:54.120095 1002550 out.go:285] * 
	W1120 22:18:54.128766 1002550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:18:54.131901 1002550 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-236741 --alsologtostderr -v=5" : exit status 80
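Note: per the trace above, the pause path first disables the kubelet, then enumerates running CRI containers by namespace label via crictl, and finally shells out to "sudo runc list -f json", which is the step that fails here because /run/runc is absent on this CRI-O node. The following is a minimal Go sketch of those two commands (copied from the trace), not minikube's actual pause code; it assumes a local root shell with crictl and runc on PATH instead of minikube's ssh_runner.

	// repro_pause_list.go: reproduce the container-listing step from the pause
	// trace above. Commands are taken verbatim from the log; the use of a local
	// shell and sudo is an assumption for illustration only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(cmd string) (string, error) {
		out, err := exec.Command("sudo", "/bin/sh", "-c", cmd).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Step 1: list containers per namespace label, as in cri.go:54 above.
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			ids, err := run("crictl ps -a --quiet --label io.kubernetes.pod.namespace=" + ns)
			fmt.Printf("namespace %s: ids=%q err=%v\n", ns, ids, err)
		}
		// Step 2: the call that fails in this run; on this node /run/runc does not
		// exist, so runc exits 1 with "open /run/runc: no such file or directory".
		out, err := run("runc list -f json")
		fmt.Printf("runc list: %s (err=%v)\n", out, err)
	}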
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-236741
helpers_test.go:243: (dbg) docker inspect pause-236741:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222",
	        "Created": "2025-11-20T22:17:06.77258714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 996437,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:17:06.837503803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/hostname",
	        "HostsPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/hosts",
	        "LogPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222-json.log",
	        "Name": "/pause-236741",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-236741:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-236741",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222",
	                "LowerDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-236741",
	                "Source": "/var/lib/docker/volumes/pause-236741/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-236741",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-236741",
	                "name.minikube.sigs.k8s.io": "pause-236741",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4273871e11309a54ed59ac20256617c68d90f137fb9a0de995baf3456c086857",
	            "SandboxKey": "/var/run/docker/netns/4273871e1130",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34136"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34134"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34135"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-236741": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:76:65:27:cc:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a35fe1f9cf13d229aa6aea89c169dc5dfbfc3662487e82ccecb13a63f68810b5",
	                    "EndpointID": "095349012d2e054f6df6a8b1b6000282bcad5ef5272248b968975411c5b3046b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-236741",
	                        "69c555880609"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
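Note: the SSH endpoint used earlier in the trace (sshutil.go:53, 127.0.0.1:34132) corresponds to NetworkSettings.Ports["22/tcp"][0].HostPort in the docker inspect output above. A small Go sketch of that lookup follows, using the same Go template string shown in the cli_runner line; it assumes docker is on PATH and reuses this run's profile name pause-236741.

	// ssh_port.go: read the host port mapped to the node's SSH port from
	// `docker container inspect`, mirroring the template used by cli_runner above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-236741").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above this prints 34132, matching the
		// sshutil.go:53 line in the trace.
		fmt.Println("ssh port:", strings.TrimSpace(string(out)))
	}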
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-236741 -n pause-236741
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-236741 -n pause-236741: exit status 2 (344.334762ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
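Note: the status probe above prints "Running" yet returns exit status 2, which the post-mortem helper tolerates ("may be ok") before moving on to collect logs. A minimal sketch of that tolerant check is shown below; the binary path and profile name are taken from this run and are assumptions for any other environment.

	// status_check.go: run the same status command as helpers_test.go:247 and
	// treat exit status 2 as a degraded-but-usable host rather than a hard error.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", "pause-236741", "-n", "pause-236741")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
			// Some component is not healthy; the helper logs this and continues.
			fmt.Printf("host=%s (exit status 2, may be ok)\n", host)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host =", host)
	}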
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-236741 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-236741 logs -n 25: (1.531228345s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-787224 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:12 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p missing-upgrade-407986 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-407986    │ jenkins │ v1.32.0 │ 20 Nov 25 22:12 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ delete  │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ ssh     │ -p NoKubernetes-787224 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │                     │
	│ stop    │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p missing-upgrade-407986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-407986    │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:14 UTC │
	│ ssh     │ -p NoKubernetes-787224 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │                     │
	│ delete  │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ delete  │ -p missing-upgrade-407986                                                                                                                │ missing-upgrade-407986    │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ stop    │ -p kubernetes-upgrade-410652                                                                                                             │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │                     │
	│ start   │ -p stopped-upgrade-239493 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-239493    │ jenkins │ v1.32.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:15 UTC │
	│ stop    │ stopped-upgrade-239493 stop                                                                                                              │ stopped-upgrade-239493    │ jenkins │ v1.32.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ start   │ -p stopped-upgrade-239493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-239493    │ jenkins │ v1.37.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ delete  │ -p stopped-upgrade-239493                                                                                                                │ stopped-upgrade-239493    │ jenkins │ v1.37.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ start   │ -p running-upgrade-803505 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-803505    │ jenkins │ v1.32.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:16 UTC │
	│ start   │ -p running-upgrade-803505 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-803505    │ jenkins │ v1.37.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:16 UTC │
	│ delete  │ -p running-upgrade-803505                                                                                                                │ running-upgrade-803505    │ jenkins │ v1.37.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:17 UTC │
	│ start   │ -p pause-236741 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:17 UTC │ 20 Nov 25 22:18 UTC │
	│ start   │ -p pause-236741 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:18 UTC │ 20 Nov 25 22:18 UTC │
	│ pause   │ -p pause-236741 --alsologtostderr -v=5                                                                                                   │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:18:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:18:22.892490 1000629 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:18:22.892661 1000629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:22.892682 1000629 out.go:374] Setting ErrFile to fd 2...
	I1120 22:18:22.892701 1000629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:22.892976 1000629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:18:22.893364 1000629 out.go:368] Setting JSON to false
	I1120 22:18:22.894368 1000629 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18028,"bootTime":1763659075,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:18:22.894478 1000629 start.go:143] virtualization:  
	I1120 22:18:22.898332 1000629 out.go:179] * [pause-236741] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:18:22.902124 1000629 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:18:22.902195 1000629 notify.go:221] Checking for updates...
	I1120 22:18:22.908120 1000629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:18:22.911192 1000629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:18:22.914093 1000629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:18:22.917633 1000629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:18:22.920582 1000629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:18:22.923974 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:22.924580 1000629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:18:22.955247 1000629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:18:22.955428 1000629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:18:23.030755 1000629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:18:23.020467619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:18:23.030867 1000629 docker.go:319] overlay module found
	I1120 22:18:23.034033 1000629 out.go:179] * Using the docker driver based on existing profile
	I1120 22:18:23.036859 1000629 start.go:309] selected driver: docker
	I1120 22:18:23.036884 1000629 start.go:930] validating driver "docker" against &{Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:23.037017 1000629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:18:23.037131 1000629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:18:23.104123 1000629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:18:23.094952495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:18:23.104561 1000629 cni.go:84] Creating CNI manager for ""
	I1120 22:18:23.104620 1000629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:18:23.104668 1000629 start.go:353] cluster config:
	{Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:23.109561 1000629 out.go:179] * Starting "pause-236741" primary control-plane node in "pause-236741" cluster
	I1120 22:18:23.112506 1000629 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:18:23.115448 1000629 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:18:23.118508 1000629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:18:23.118558 1000629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:18:23.118569 1000629 cache.go:65] Caching tarball of preloaded images
	I1120 22:18:23.118641 1000629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:18:23.118655 1000629 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:18:23.118936 1000629 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:18:23.119126 1000629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/config.json ...
	I1120 22:18:23.137964 1000629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:18:23.137988 1000629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:18:23.138007 1000629 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:18:23.138029 1000629 start.go:360] acquireMachinesLock for pause-236741: {Name:mk1142cd143591a1f43b45a92b92df2edd3a1536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:18:23.138097 1000629 start.go:364] duration metric: took 47.787µs to acquireMachinesLock for "pause-236741"
	I1120 22:18:23.138121 1000629 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:18:23.138127 1000629 fix.go:54] fixHost starting: 
	I1120 22:18:23.138393 1000629 cli_runner.go:164] Run: docker container inspect pause-236741 --format={{.State.Status}}
	I1120 22:18:23.155412 1000629 fix.go:112] recreateIfNeeded on pause-236741: state=Running err=<nil>
	W1120 22:18:23.155444 1000629 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:18:23.495145  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:23.513583  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:23.513656  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:23.555312  984680 cri.go:89] found id: ""
	I1120 22:18:23.555335  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.555345  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:23.555351  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:23.555410  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:23.597782  984680 cri.go:89] found id: ""
	I1120 22:18:23.597805  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.597813  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:23.597820  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:23.597883  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:23.638033  984680 cri.go:89] found id: ""
	I1120 22:18:23.638065  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.638074  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:23.638080  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:23.638151  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:23.669078  984680 cri.go:89] found id: ""
	I1120 22:18:23.669102  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.669112  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:23.669118  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:23.669179  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:23.717538  984680 cri.go:89] found id: ""
	I1120 22:18:23.717561  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.717569  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:23.717576  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:23.717741  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:23.751133  984680 cri.go:89] found id: ""
	I1120 22:18:23.751213  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.751225  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:23.751232  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:23.751297  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:23.781661  984680 cri.go:89] found id: ""
	I1120 22:18:23.781689  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.781697  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:23.781704  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:23.781761  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:23.829106  984680 cri.go:89] found id: ""
	I1120 22:18:23.829132  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.829141  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:23.829156  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:23.829168  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:23.920781  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:23.920804  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:23.920817  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:23.962120  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:23.962157  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:24.000093  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:24.000124  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:24.141604  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:24.141641  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:26.658755  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:26.668956  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:26.669027  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:26.693620  984680 cri.go:89] found id: ""
	I1120 22:18:26.693647  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.693656  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:26.693662  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:26.693718  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:26.719003  984680 cri.go:89] found id: ""
	I1120 22:18:26.719026  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.719042  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:26.719048  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:26.719109  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:26.743956  984680 cri.go:89] found id: ""
	I1120 22:18:26.743979  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.743987  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:26.743993  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:26.744049  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:26.776162  984680 cri.go:89] found id: ""
	I1120 22:18:26.776188  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.776197  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:26.776204  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:26.776260  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:26.806777  984680 cri.go:89] found id: ""
	I1120 22:18:26.806802  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.806812  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:26.806819  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:26.806876  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:26.832977  984680 cri.go:89] found id: ""
	I1120 22:18:26.833000  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.833009  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:26.833015  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:26.833073  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:26.860118  984680 cri.go:89] found id: ""
	I1120 22:18:26.860143  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.860153  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:26.860165  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:26.860227  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:26.890161  984680 cri.go:89] found id: ""
	I1120 22:18:26.890187  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.890197  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:26.890207  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:26.890218  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:27.008382  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:27.008428  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:27.026088  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:27.026117  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:27.094892  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:27.094915  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:27.094928  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:27.131548  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:27.131586  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:23.158662 1000629 out.go:252] * Updating the running docker "pause-236741" container ...
	I1120 22:18:23.158706 1000629 machine.go:94] provisionDockerMachine start ...
	I1120 22:18:23.158788 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.176630 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.176952 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.176966 1000629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:18:23.322675 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-236741
	
	I1120 22:18:23.322700 1000629 ubuntu.go:182] provisioning hostname "pause-236741"
	I1120 22:18:23.322793 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.340879 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.341200 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.341218 1000629 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-236741 && echo "pause-236741" | sudo tee /etc/hostname
	I1120 22:18:23.494133 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-236741
	
	I1120 22:18:23.494224 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.521405 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.521716 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.521740 1000629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-236741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-236741/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-236741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:18:23.679482 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
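Each provisioning step above (hostname, the /etc/hosts fix-up, and later the certificate copies) is a single command executed over the container's forwarded SSH port, 127.0.0.1:34132 in this run. A rough sketch of such a remote call with golang.org/x/crypto/ssh; the address, user and key path are the ones printed in the log, and this is an illustration only, not minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the machine's forwarded SSH port and runs one command,
// roughly what the provisioning steps above do per command.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:34132", "docker",
		"/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa",
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```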
	I1120 22:18:23.679555 1000629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:18:23.679604 1000629 ubuntu.go:190] setting up certificates
	I1120 22:18:23.679640 1000629 provision.go:84] configureAuth start
	I1120 22:18:23.679732 1000629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-236741
	I1120 22:18:23.706135 1000629 provision.go:143] copyHostCerts
	I1120 22:18:23.706227 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:18:23.706244 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:18:23.706321 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:18:23.706441 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:18:23.706447 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:18:23.706473 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:18:23.706523 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:18:23.706528 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:18:23.706558 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:18:23.706611 1000629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.pause-236741 san=[127.0.0.1 192.168.85.2 localhost minikube pause-236741]
	I1120 22:18:24.140272 1000629 provision.go:177] copyRemoteCerts
	I1120 22:18:24.140388 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:18:24.140473 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:24.163738 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:24.267489 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:18:24.288465 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 22:18:24.309178 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:18:24.328867 1000629 provision.go:87] duration metric: took 649.187245ms to configureAuth
	I1120 22:18:24.328893 1000629 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:18:24.329132 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:24.329247 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:24.347789 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:24.348127 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:24.348143 1000629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:18:29.734652 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:18:29.734670 1000629 machine.go:97] duration metric: took 6.575955573s to provisionDockerMachine
	I1120 22:18:29.734680 1000629 start.go:293] postStartSetup for "pause-236741" (driver="docker")
	I1120 22:18:29.734691 1000629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:18:29.734744 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:18:29.734784 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:29.757384 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:29.872703 1000629 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:18:29.877096 1000629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:18:29.877124 1000629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:18:29.877135 1000629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:18:29.877201 1000629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:18:29.877280 1000629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:18:29.877386 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:18:29.886724 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:18:29.911966 1000629 start.go:296] duration metric: took 177.269318ms for postStartSetup
	I1120 22:18:29.912121 1000629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:18:29.912171 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:29.939355 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.068116 1000629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:18:30.074420 1000629 fix.go:56] duration metric: took 6.936282904s for fixHost
	I1120 22:18:30.074445 1000629 start.go:83] releasing machines lock for "pause-236741", held for 6.936335516s
	I1120 22:18:30.074534 1000629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-236741
	I1120 22:18:30.096583 1000629 ssh_runner.go:195] Run: cat /version.json
	I1120 22:18:30.096638 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:30.096978 1000629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:18:30.097039 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:30.127159 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.137853 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.334858 1000629 ssh_runner.go:195] Run: systemctl --version
	I1120 22:18:30.341565 1000629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:18:30.382224 1000629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:18:30.386688 1000629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:18:30.386812 1000629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:18:30.395462 1000629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:18:30.395484 1000629 start.go:496] detecting cgroup driver to use...
	I1120 22:18:30.395514 1000629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:18:30.395570 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:18:30.411359 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:18:30.424788 1000629 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:18:30.424850 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:18:30.441072 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:18:30.454903 1000629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:18:30.599117 1000629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:18:30.728486 1000629 docker.go:234] disabling docker service ...
	I1120 22:18:30.728644 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:18:30.743855 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:18:30.756926 1000629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:18:30.904985 1000629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:18:31.043369 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:18:31.057981 1000629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:18:31.075538 1000629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:18:31.075616 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.085846 1000629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:18:31.085920 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.095736 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.105899 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.116071 1000629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:18:31.125152 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.134925 1000629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.144297 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.153773 1000629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:18:31.162256 1000629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:18:31.170498 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:31.313015 1000629 ssh_runner.go:195] Run: sudo systemctl restart crio
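The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the expected pause image and the cgroupfs cgroup manager before the service is restarted. A small Go sketch of the two central edits (path and values copied from the log; run it against a scratch copy unless you actually intend to reconfigure CRI-O, and remember the daemon-reload/restart afterwards):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf replays two of the edits from the log: point pause_image
// at the desired pause image and force the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The runtime only picks the change up after:
	//   systemctl daemon-reload && systemctl restart crio
}
```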
	I1120 22:18:31.537349 1000629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:18:31.537415 1000629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:18:31.541200 1000629 start.go:564] Will wait 60s for crictl version
	I1120 22:18:31.541272 1000629 ssh_runner.go:195] Run: which crictl
	I1120 22:18:31.544973 1000629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:18:31.572843 1000629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:18:31.572951 1000629 ssh_runner.go:195] Run: crio --version
	I1120 22:18:31.601097 1000629 ssh_runner.go:195] Run: crio --version
	I1120 22:18:31.631630 1000629 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:18:29.663275  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:29.674602  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:29.674675  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:29.705575  984680 cri.go:89] found id: ""
	I1120 22:18:29.705598  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.705606  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:29.705613  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:29.705670  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:29.734156  984680 cri.go:89] found id: ""
	I1120 22:18:29.734179  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.734187  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:29.734193  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:29.734301  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:29.768931  984680 cri.go:89] found id: ""
	I1120 22:18:29.768954  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.768962  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:29.768969  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:29.769030  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:29.801457  984680 cri.go:89] found id: ""
	I1120 22:18:29.801480  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.801487  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:29.801493  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:29.801550  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:29.830461  984680 cri.go:89] found id: ""
	I1120 22:18:29.830485  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.830493  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:29.830500  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:29.830558  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:29.857234  984680 cri.go:89] found id: ""
	I1120 22:18:29.857256  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.857265  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:29.857271  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:29.857329  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:29.890485  984680 cri.go:89] found id: ""
	I1120 22:18:29.890509  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.890517  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:29.890523  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:29.890581  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:29.923700  984680 cri.go:89] found id: ""
	I1120 22:18:29.923723  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.923732  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:29.923741  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:29.923759  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:30.058319  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:30.058362  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:30.088909  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:30.088943  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:30.211154  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:30.211177  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:30.211190  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:30.253172  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:30.253207  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:31.634653 1000629 cli_runner.go:164] Run: docker network inspect pause-236741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:18:31.650582 1000629 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:18:31.655078 1000629 kubeadm.go:884] updating cluster {Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:18:31.655225 1000629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:18:31.655284 1000629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:18:31.689597 1000629 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:18:31.689620 1000629 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:18:31.689682 1000629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:18:31.718630 1000629 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:18:31.718656 1000629 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:18:31.718665 1000629 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1120 22:18:31.718760 1000629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-236741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:18:31.718841 1000629 ssh_runner.go:195] Run: crio config
	I1120 22:18:31.793209 1000629 cni.go:84] Creating CNI manager for ""
	I1120 22:18:31.793246 1000629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:18:31.793265 1000629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:18:31.793289 1000629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-236741 NodeName:pause-236741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:18:31.793419 1000629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-236741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
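The generated kubeadm.yaml shown above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. One quick sanity check before handing such a file to kubeadm is to walk the documents and print each apiVersion/kind; a minimal sketch using gopkg.in/yaml.v3 (the local file name is hypothetical):

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Expect kubeadm.k8s.io/v1beta4 InitConfiguration, ClusterConfiguration,
		// kubelet.config.k8s.io/v1beta1 KubeletConfiguration, etc.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
```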
	I1120 22:18:31.793497 1000629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:18:31.801455 1000629 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:18:31.801598 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:18:31.809830 1000629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1120 22:18:31.823345 1000629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:18:31.836683 1000629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1120 22:18:31.849565 1000629 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:18:31.853494 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:31.987511 1000629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:18:32.001694 1000629 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741 for IP: 192.168.85.2
	I1120 22:18:32.001716 1000629 certs.go:195] generating shared ca certs ...
	I1120 22:18:32.001734 1000629 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.001875 1000629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:18:32.001938 1000629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:18:32.001949 1000629 certs.go:257] generating profile certs ...
	I1120 22:18:32.002046 1000629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key
	I1120 22:18:32.002116 1000629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.key.bfd21aee
	I1120 22:18:32.002161 1000629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.key
	I1120 22:18:32.002282 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:18:32.002315 1000629 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:18:32.002331 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:18:32.002357 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:18:32.002383 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:18:32.002407 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:18:32.002453 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:18:32.003301 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:18:32.026010 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:18:32.044867 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:18:32.063025 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:18:32.083745 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 22:18:32.101538 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:18:32.119071 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:18:32.136479 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:18:32.153916 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:18:32.171419 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:18:32.188500 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:18:32.205791 1000629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:18:32.218381 1000629 ssh_runner.go:195] Run: openssl version
	I1120 22:18:32.225059 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.232659 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:18:32.240427 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.244631 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.244705 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.288254 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:18:32.295832 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.303382 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:18:32.310679 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.314585 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.314648 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.355873 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:18:32.363469 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.371024 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:18:32.378858 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.383091 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.383178 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.424629 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:18:32.432262 1000629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:18:32.436031 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:18:32.477356 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:18:32.518320 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:18:32.560277 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:18:32.601294 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:18:32.642648 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
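Each openssl invocation above asks the same question per certificate: does it expire within the next 86400 seconds (`-checkend 86400`)? The same check can be done natively; a short sketch with crypto/x509 (certificate paths as in the log, read access on the node assumed):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// equivalent in spirit to `openssl x509 -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		expiring, err := checkend(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, expiring)
	}
}
```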
	I1120 22:18:32.683896 1000629 kubeadm.go:401] StartCluster: {Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:32.684024 1000629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:18:32.684100 1000629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:18:32.711766 1000629 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:32.711794 1000629 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:32.711799 1000629 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:32.711803 1000629 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:32.711806 1000629 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:32.711809 1000629 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:32.711812 1000629 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:32.711815 1000629 cri.go:89] found id: ""
	I1120 22:18:32.711865 1000629 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:18:32.723070 1000629 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:32Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:18:32.723155 1000629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:18:32.731085 1000629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
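The restart decision here reduces to whether the kubeadm/kubelet artifacts from a previous start are still on disk. A sketch of that existence check (paths taken from the log; purely illustrative, and on the real node it needs root):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	required := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	allPresent := true
	for _, p := range required {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing %s: %v\n", p, err)
			allPresent = false
		}
	}
	if allPresent {
		fmt.Println("found existing configuration files, a cluster restart can be attempted")
	} else {
		fmt.Println("no previous configuration found, a fresh kubeadm init would be needed")
	}
}
```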
	I1120 22:18:32.731105 1000629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:18:32.731157 1000629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:18:32.738479 1000629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:18:32.739195 1000629 kubeconfig.go:125] found "pause-236741" server: "https://192.168.85.2:8443"
	I1120 22:18:32.739981 1000629 kapi.go:59] client config for pause-236741: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 22:18:32.740465 1000629 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 22:18:32.740486 1000629 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 22:18:32.740492 1000629 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 22:18:32.740497 1000629 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 22:18:32.740501 1000629 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 22:18:32.740768 1000629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:18:32.749287 1000629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:18:32.749323 1000629 kubeadm.go:602] duration metric: took 18.212109ms to restartPrimaryControlPlane
	I1120 22:18:32.749332 1000629 kubeadm.go:403] duration metric: took 65.445252ms to StartCluster
	I1120 22:18:32.749376 1000629 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.749455 1000629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:18:32.750292 1000629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.750514 1000629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:18:32.750862 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:32.750915 1000629 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:18:32.754160 1000629 out.go:179] * Enabled addons: 
	I1120 22:18:32.754169 1000629 out.go:179] * Verifying Kubernetes components...
	I1120 22:18:32.756932 1000629 addons.go:515] duration metric: took 6.005928ms for enable addons: enabled=[]
	I1120 22:18:32.756998 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:32.790256  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:32.801014  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:32.801084  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:32.862873  984680 cri.go:89] found id: ""
	I1120 22:18:32.862897  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.862910  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:32.862916  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:32.863044  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:32.899685  984680 cri.go:89] found id: ""
	I1120 22:18:32.899707  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.899716  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:32.899722  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:32.899778  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:32.934715  984680 cri.go:89] found id: ""
	I1120 22:18:32.934737  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.934746  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:32.934752  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:32.934806  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:32.981430  984680 cri.go:89] found id: ""
	I1120 22:18:32.981513  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.981537  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:32.981570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:32.981650  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:33.037826  984680 cri.go:89] found id: ""
	I1120 22:18:33.037849  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.037857  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:33.037864  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:33.037921  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:33.093444  984680 cri.go:89] found id: ""
	I1120 22:18:33.093466  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.093474  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:33.093481  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:33.093537  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:33.151700  984680 cri.go:89] found id: ""
	I1120 22:18:33.151721  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.151730  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:33.151736  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:33.151792  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:33.208295  984680 cri.go:89] found id: ""
	I1120 22:18:33.208358  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.208382  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:33.208407  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:33.208443  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:33.260943  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:33.264676  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:33.314393  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:33.314418  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:33.486520  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:33.486602  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:33.509637  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:33.509664  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:33.618171  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:36.119153  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:36.139724  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:36.139842  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:36.216161  984680 cri.go:89] found id: ""
	I1120 22:18:36.216229  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.216252  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:36.216281  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:36.216360  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:36.267886  984680 cri.go:89] found id: ""
	I1120 22:18:36.267952  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.267974  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:36.267997  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:36.268074  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:36.318897  984680 cri.go:89] found id: ""
	I1120 22:18:36.318964  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.319011  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:36.319032  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:36.319163  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:36.371602  984680 cri.go:89] found id: ""
	I1120 22:18:36.371670  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.371692  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:36.371714  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:36.371798  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:36.415871  984680 cri.go:89] found id: ""
	I1120 22:18:36.415938  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.415960  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:36.415983  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:36.416060  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:36.464128  984680 cri.go:89] found id: ""
	I1120 22:18:36.464209  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.464238  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:36.464260  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:36.464342  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:36.505319  984680 cri.go:89] found id: ""
	I1120 22:18:36.505399  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.505421  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:36.505452  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:36.505529  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:36.581265  984680 cri.go:89] found id: ""
	I1120 22:18:36.581356  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.581380  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:36.581418  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:36.581454  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:36.732130  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:36.732213  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:36.753689  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:36.753716  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:36.886432  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:36.886499  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:36.886526  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:36.935400  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:36.935444  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:33.117980 1000629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:18:33.171565 1000629 node_ready.go:35] waiting up to 6m0s for node "pause-236741" to be "Ready" ...
	I1120 22:18:37.758329 1000629 node_ready.go:49] node "pause-236741" is "Ready"
	I1120 22:18:37.758361 1000629 node_ready.go:38] duration metric: took 4.586751448s for node "pause-236741" to be "Ready" ...
	I1120 22:18:37.758378 1000629 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:18:37.758438 1000629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:37.775569 1000629 api_server.go:72] duration metric: took 5.025015358s to wait for apiserver process to appear ...
	I1120 22:18:37.775603 1000629 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:18:37.775621 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:37.790890 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 22:18:37.790919 1000629 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 22:18:39.506654  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:39.517276  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:39.517344  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:39.551102  984680 cri.go:89] found id: ""
	I1120 22:18:39.551126  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.551135  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:39.551141  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:39.551201  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:39.582589  984680 cri.go:89] found id: ""
	I1120 22:18:39.582622  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.582631  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:39.582638  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:39.582696  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:39.613586  984680 cri.go:89] found id: ""
	I1120 22:18:39.613610  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.613619  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:39.613626  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:39.613685  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:39.642082  984680 cri.go:89] found id: ""
	I1120 22:18:39.642109  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.642117  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:39.642126  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:39.642200  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:39.670446  984680 cri.go:89] found id: ""
	I1120 22:18:39.670472  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.670480  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:39.670487  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:39.670549  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:39.700154  984680 cri.go:89] found id: ""
	I1120 22:18:39.700181  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.700191  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:39.700197  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:39.700259  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:39.726587  984680 cri.go:89] found id: ""
	I1120 22:18:39.726614  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.726623  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:39.726629  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:39.726688  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:39.758324  984680 cri.go:89] found id: ""
	I1120 22:18:39.758349  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.758359  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:39.758368  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:39.758411  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:39.882985  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:39.883072  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:39.899411  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:39.899445  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:39.971211  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:39.971232  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:39.971244  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:40.010218  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:40.010267  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:38.276409 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:38.284900 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 22:18:38.284927 1000629 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 22:18:38.776312 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:38.784822 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:18:38.785988 1000629 api_server.go:141] control plane version: v1.34.1
	I1120 22:18:38.786016 1000629 api_server.go:131] duration metric: took 1.010406581s to wait for apiserver health ...
	I1120 22:18:38.786024 1000629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:18:38.790326 1000629 system_pods.go:59] 7 kube-system pods found
	I1120 22:18:38.790366 1000629 system_pods.go:61] "coredns-66bc5c9577-4ssl6" [2e79a16f-633f-4616-87b8-a0d635313169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:18:38.790375 1000629 system_pods.go:61] "etcd-pause-236741" [de3ca9c3-20fe-43e0-8420-5a7b7d100a82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:18:38.790380 1000629 system_pods.go:61] "kindnet-gbtj6" [85e46865-a2d3-4037-a84c-4ed172caf51d] Running
	I1120 22:18:38.790387 1000629 system_pods.go:61] "kube-apiserver-pause-236741" [3e1b62e0-86db-4798-bce3-30bd50540f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:18:38.790394 1000629 system_pods.go:61] "kube-controller-manager-pause-236741" [5c5b61a2-25e9-4daa-b1eb-505512928b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:18:38.790399 1000629 system_pods.go:61] "kube-proxy-bg8b2" [e4b15707-0927-425d-8b96-e3e547526892] Running
	I1120 22:18:38.790404 1000629 system_pods.go:61] "kube-scheduler-pause-236741" [553498d5-ab29-49d0-8282-e57a04beeb0c] Running
	I1120 22:18:38.790414 1000629 system_pods.go:74] duration metric: took 4.384557ms to wait for pod list to return data ...
	I1120 22:18:38.790424 1000629 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:18:38.792907 1000629 default_sa.go:45] found service account: "default"
	I1120 22:18:38.792977 1000629 default_sa.go:55] duration metric: took 2.545871ms for default service account to be created ...
	I1120 22:18:38.793001 1000629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:18:38.797345 1000629 system_pods.go:86] 7 kube-system pods found
	I1120 22:18:38.797429 1000629 system_pods.go:89] "coredns-66bc5c9577-4ssl6" [2e79a16f-633f-4616-87b8-a0d635313169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:18:38.797454 1000629 system_pods.go:89] "etcd-pause-236741" [de3ca9c3-20fe-43e0-8420-5a7b7d100a82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:18:38.797495 1000629 system_pods.go:89] "kindnet-gbtj6" [85e46865-a2d3-4037-a84c-4ed172caf51d] Running
	I1120 22:18:38.797522 1000629 system_pods.go:89] "kube-apiserver-pause-236741" [3e1b62e0-86db-4798-bce3-30bd50540f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:18:38.797542 1000629 system_pods.go:89] "kube-controller-manager-pause-236741" [5c5b61a2-25e9-4daa-b1eb-505512928b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:18:38.797577 1000629 system_pods.go:89] "kube-proxy-bg8b2" [e4b15707-0927-425d-8b96-e3e547526892] Running
	I1120 22:18:38.797600 1000629 system_pods.go:89] "kube-scheduler-pause-236741" [553498d5-ab29-49d0-8282-e57a04beeb0c] Running
	I1120 22:18:38.797621 1000629 system_pods.go:126] duration metric: took 4.601741ms to wait for k8s-apps to be running ...
	I1120 22:18:38.797654 1000629 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:18:38.797747 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:38.812018 1000629 system_svc.go:56] duration metric: took 14.367546ms WaitForService to wait for kubelet
	I1120 22:18:38.812096 1000629 kubeadm.go:587] duration metric: took 6.061547126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:18:38.812158 1000629 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:18:38.817284 1000629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:18:38.817364 1000629 node_conditions.go:123] node cpu capacity is 2
	I1120 22:18:38.817391 1000629 node_conditions.go:105] duration metric: took 5.214262ms to run NodePressure ...
	I1120 22:18:38.817417 1000629 start.go:242] waiting for startup goroutines ...
	I1120 22:18:38.817450 1000629 start.go:247] waiting for cluster config update ...
	I1120 22:18:38.817476 1000629 start.go:256] writing updated cluster config ...
	I1120 22:18:38.817884 1000629 ssh_runner.go:195] Run: rm -f paused
	I1120 22:18:38.827331 1000629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:18:38.828106 1000629 kapi.go:59] client config for pause-236741: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 22:18:38.833756 1000629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4ssl6" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:18:40.839301 1000629 pod_ready.go:104] pod "coredns-66bc5c9577-4ssl6" is not "Ready", error: <nil>
	W1120 22:18:42.840625 1000629 pod_ready.go:104] pod "coredns-66bc5c9577-4ssl6" is not "Ready", error: <nil>
	I1120 22:18:42.579522  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:42.590029  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:42.590102  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:42.617228  984680 cri.go:89] found id: ""
	I1120 22:18:42.617305  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.617328  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:42.617351  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:42.617439  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:42.644046  984680 cri.go:89] found id: ""
	I1120 22:18:42.644109  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.644125  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:42.644131  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:42.644212  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:42.668881  984680 cri.go:89] found id: ""
	I1120 22:18:42.668905  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.668914  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:42.668920  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:42.668980  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:42.695074  984680 cri.go:89] found id: ""
	I1120 22:18:42.695097  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.695105  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:42.695111  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:42.695173  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:42.720549  984680 cri.go:89] found id: ""
	I1120 22:18:42.720626  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.720650  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:42.720665  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:42.720752  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:42.746163  984680 cri.go:89] found id: ""
	I1120 22:18:42.746186  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.746195  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:42.746233  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:42.746312  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:42.773491  984680 cri.go:89] found id: ""
	I1120 22:18:42.773514  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.773522  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:42.773529  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:42.773608  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:42.798521  984680 cri.go:89] found id: ""
	I1120 22:18:42.798544  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.798552  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:42.798593  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:42.798618  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:42.921612  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:42.921650  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:42.938199  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:42.938235  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:43.011621  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:43.011646  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:43.011659  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:43.049412  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:43.049450  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:45.584178  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:45.594605  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:45.594670  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:45.621692  984680 cri.go:89] found id: ""
	I1120 22:18:45.621717  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.621726  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:45.621733  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:45.621806  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:45.654737  984680 cri.go:89] found id: ""
	I1120 22:18:45.654764  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.654773  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:45.654779  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:45.654835  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:45.681366  984680 cri.go:89] found id: ""
	I1120 22:18:45.681403  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.681412  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:45.681420  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:45.681478  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:45.706700  984680 cri.go:89] found id: ""
	I1120 22:18:45.706726  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.706735  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:45.706742  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:45.706886  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:45.732430  984680 cri.go:89] found id: ""
	I1120 22:18:45.732455  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.732464  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:45.732470  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:45.732526  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:45.757983  984680 cri.go:89] found id: ""
	I1120 22:18:45.758058  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.758081  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:45.758117  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:45.758202  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:45.783718  984680 cri.go:89] found id: ""
	I1120 22:18:45.783740  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.783748  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:45.783754  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:45.783812  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:45.808961  984680 cri.go:89] found id: ""
	I1120 22:18:45.809025  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.809047  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:45.809073  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:45.809090  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:45.946124  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:45.946169  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:45.963361  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:45.963400  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:46.028693  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:46.028716  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:46.028730  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:46.066910  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:46.066949  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:45.339032 1000629 pod_ready.go:94] pod "coredns-66bc5c9577-4ssl6" is "Ready"
	I1120 22:18:45.339068 1000629 pod_ready.go:86] duration metric: took 6.505236592s for pod "coredns-66bc5c9577-4ssl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:45.341818 1000629 pod_ready.go:83] waiting for pod "etcd-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:47.347106 1000629 pod_ready.go:94] pod "etcd-pause-236741" is "Ready"
	I1120 22:18:47.347136 1000629 pod_ready.go:86] duration metric: took 2.0052902s for pod "etcd-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:47.349532 1000629 pod_ready.go:83] waiting for pod "kube-apiserver-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:48.857012 1000629 pod_ready.go:94] pod "kube-apiserver-pause-236741" is "Ready"
	I1120 22:18:48.857041 1000629 pod_ready.go:86] duration metric: took 1.507478468s for pod "kube-apiserver-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:48.861181 1000629 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:18:50.866315 1000629 pod_ready.go:104] pod "kube-controller-manager-pause-236741" is not "Ready", error: <nil>
	I1120 22:18:51.367168 1000629 pod_ready.go:94] pod "kube-controller-manager-pause-236741" is "Ready"
	I1120 22:18:51.367197 1000629 pod_ready.go:86] duration metric: took 2.505986512s for pod "kube-controller-manager-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.369670 1000629 pod_ready.go:83] waiting for pod "kube-proxy-bg8b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.374158 1000629 pod_ready.go:94] pod "kube-proxy-bg8b2" is "Ready"
	I1120 22:18:51.374188 1000629 pod_ready.go:86] duration metric: took 4.491857ms for pod "kube-proxy-bg8b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.376460 1000629 pod_ready.go:83] waiting for pod "kube-scheduler-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.537252 1000629 pod_ready.go:94] pod "kube-scheduler-pause-236741" is "Ready"
	I1120 22:18:51.537282 1000629 pod_ready.go:86] duration metric: took 160.798926ms for pod "kube-scheduler-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.537295 1000629 pod_ready.go:40] duration metric: took 12.709879606s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:18:51.592483 1000629 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:18:51.595556 1000629 out.go:179] * Done! kubectl is now configured to use "pause-236741" cluster and "default" namespace by default
	I1120 22:18:48.600800  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:48.611202  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:48.611273  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:48.643406  984680 cri.go:89] found id: ""
	I1120 22:18:48.643432  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.643440  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:48.643446  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:48.643509  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:48.686260  984680 cri.go:89] found id: ""
	I1120 22:18:48.686286  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.686295  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:48.686301  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:48.686359  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:48.716191  984680 cri.go:89] found id: ""
	I1120 22:18:48.716216  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.716225  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:48.716232  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:48.716289  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:48.742413  984680 cri.go:89] found id: ""
	I1120 22:18:48.742438  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.742447  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:48.742453  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:48.742513  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:48.770171  984680 cri.go:89] found id: ""
	I1120 22:18:48.770193  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.770202  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:48.770208  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:48.770274  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:48.797881  984680 cri.go:89] found id: ""
	I1120 22:18:48.797907  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.797915  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:48.797922  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:48.797982  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:48.824382  984680 cri.go:89] found id: ""
	I1120 22:18:48.824405  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.824415  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:48.824421  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:48.824480  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:48.849885  984680 cri.go:89] found id: ""
	I1120 22:18:48.849910  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.849919  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:48.849928  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:48.849941  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:48.977875  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:48.977915  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:48.994382  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:48.994410  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:49.065336  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:49.065402  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:49.065431  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:49.102035  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:49.102073  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:51.632972  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:51.653570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:51.653657  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:51.693396  984680 cri.go:89] found id: ""
	I1120 22:18:51.693420  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.693432  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:51.693439  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:51.693505  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:51.731167  984680 cri.go:89] found id: ""
	I1120 22:18:51.731189  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.731198  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:51.731211  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:51.731267  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:51.765527  984680 cri.go:89] found id: ""
	I1120 22:18:51.765555  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.765564  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:51.765570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:51.765627  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:51.818274  984680 cri.go:89] found id: ""
	I1120 22:18:51.818321  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.818330  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:51.818337  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:51.818407  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:51.860510  984680 cri.go:89] found id: ""
	I1120 22:18:51.860537  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.860570  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:51.860578  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:51.860649  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:51.894663  984680 cri.go:89] found id: ""
	I1120 22:18:51.894686  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.894695  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:51.894701  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:51.894767  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:51.926854  984680 cri.go:89] found id: ""
	I1120 22:18:51.926880  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.926888  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:51.926894  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:51.926955  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:51.954760  984680 cri.go:89] found id: ""
	I1120 22:18:51.954786  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.954794  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:51.954808  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:51.954820  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:52.000601  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:52.000646  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:52.051225  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:52.051253  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:52.205529  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:52.205572  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:52.223072  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:52.223104  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:52.309846  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> CRI-O <==
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.167410052Z" level=info msg="Started container" PID=2370 containerID=a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201 description=kube-system/coredns-66bc5c9577-4ssl6/coredns id=ca18e76c-7240-424a-896e-2de979f96057 name=/runtime.v1.RuntimeService/StartContainer sandboxID=274ac4a7622de0615bedc48a486364c005001a7b18883045bd3c33ee1b3b26af
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.175777668Z" level=info msg="Created container 281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb: kube-system/kube-controller-manager-pause-236741/kube-controller-manager" id=955ec58d-89a3-417b-ab21-8492b3c8db1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.177000937Z" level=info msg="Starting container: 281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb" id=3ae093f0-20c3-4898-8ad3-5970e4aeb5bc name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.186792637Z" level=info msg="Started container" PID=2404 containerID=281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb description=kube-system/kube-controller-manager-pause-236741/kube-controller-manager id=3ae093f0-20c3-4898-8ad3-5970e4aeb5bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec2b88ff7e5b62af01320bf590825b6073c408363543b08ad1c9813ede3ad1b9
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.190015915Z" level=info msg="Created container c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499: kube-system/kube-apiserver-pause-236741/kube-apiserver" id=eff5437a-5668-4edb-b4da-475c86641908 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.197513473Z" level=info msg="Created container 306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00: kube-system/kube-scheduler-pause-236741/kube-scheduler" id=9c79e0b2-f7c9-47a8-b2bd-0a9a67020f75 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.204564632Z" level=info msg="Starting container: c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499" id=5b2b3aef-aa5b-447e-b631-156bd765356f name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.205035687Z" level=info msg="Starting container: 306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00" id=f2b48e6a-3f1a-4059-8279-d88f1d5fa412 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.206754412Z" level=info msg="Started container" PID=2392 containerID=c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499 description=kube-system/kube-apiserver-pause-236741/kube-apiserver id=5b2b3aef-aa5b-447e-b631-156bd765356f name=/runtime.v1.RuntimeService/StartContainer sandboxID=57eefca9a4ec876586dbf2ea1fd1284de9d72d83718f0516abb7eb1522830280
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.213930464Z" level=info msg="Started container" PID=2395 containerID=306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00 description=kube-system/kube-scheduler-pause-236741/kube-scheduler id=f2b48e6a-3f1a-4059-8279-d88f1d5fa412 name=/runtime.v1.RuntimeService/StartContainer sandboxID=506c12662998cdd8f5c23a68e3bc8c9ec1dc7570196fbafd348684b282566994
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.484662668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.488366257Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.488409687Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.48843413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491677553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491710956Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491733086Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494895752Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494936188Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494962962Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498095564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498129575Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498154076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.501492785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.501527871Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	281b6ca6a9d13       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   ec2b88ff7e5b6       kube-controller-manager-pause-236741   kube-system
	c24841d4aedba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   57eefca9a4ec8       kube-apiserver-pause-236741            kube-system
	306be761b64f9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   506c12662998c       kube-scheduler-pause-236741            kube-system
	a4a604a24a4c3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   274ac4a7622de       coredns-66bc5c9577-4ssl6               kube-system
	c468b960ba6f0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   48d8fe8acbbe1       kindnet-gbtj6                          kube-system
	1560c64f26dfa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   7db1538639a28       kube-proxy-bg8b2                       kube-system
	8ceea0cc240b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   224066caa129e       etcd-pause-236741                      kube-system
	3c387221343fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   274ac4a7622de       coredns-66bc5c9577-4ssl6               kube-system
	9f0c71877dc9b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   7db1538639a28       kube-proxy-bg8b2                       kube-system
	58052be823cbf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   48d8fe8acbbe1       kindnet-gbtj6                          kube-system
	7e36379b8c3d4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   506c12662998c       kube-scheduler-pause-236741            kube-system
	c3511d0b77176       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   57eefca9a4ec8       kube-apiserver-pause-236741            kube-system
	6bf0157c5e580       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   ec2b88ff7e5b6       kube-controller-manager-pause-236741   kube-system
	9e252ff958f22       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   224066caa129e       etcd-pause-236741                      kube-system
	
	
	==> coredns [3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35147 - 57121 "HINFO IN 3839960908849394579.3685477291472904481. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014040829s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42202 - 21341 "HINFO IN 1381893881431481202.4857600648861538686. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032907083s
	
	
	==> describe nodes <==
	Name:               pause-236741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-236741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=pause-236741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_17_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:17:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-236741
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:18:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-236741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                523e06a0-3ec3-47af-bbb9-b7381baa2345
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4ssl6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-236741                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-gbtj6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-236741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-236741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-bg8b2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-236741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-236741 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-236741 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-236741 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 90s                kubelet          Starting kubelet.
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-236741 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-236741 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-236741 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-236741 event: Registered Node pause-236741 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-236741 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-236741 event: Registered Node pause-236741 in Controller
	
	
	==> dmesg <==
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	[Nov20 21:46] overlayfs: idmapped layers are currently not supported
	[  +2.922279] overlayfs: idmapped layers are currently not supported
	[Nov20 21:48] overlayfs: idmapped layers are currently not supported
	[Nov20 21:52] overlayfs: idmapped layers are currently not supported
	[Nov20 21:54] overlayfs: idmapped layers are currently not supported
	[Nov20 21:59] overlayfs: idmapped layers are currently not supported
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90] <==
	{"level":"warn","ts":"2025-11-20T22:18:35.865454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.885483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.908035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.923190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.944700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.977287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.991185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.023469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.039639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.068091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.085087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.104033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.131916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.158338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.190726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.234663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.266612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.304776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.339179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.373547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.440004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.468974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.498821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.522887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.727336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58006","server-name":"","error":"EOF"}
	
	
	==> etcd [9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33] <==
	{"level":"warn","ts":"2025-11-20T22:17:29.667195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.683071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.705183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.738818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.797463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.809804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.866937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57164","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T22:18:24.510501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-20T22:18:24.510554Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-236741","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-20T22:18:24.510648Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T22:18:24.784112Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T22:18:24.784198Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.784219Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-20T22:18:24.784274Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-20T22:18:24.784352Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784379Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T22:18:24.784386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784423Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784431Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T22:18:24.784437Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.787676Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-20T22:18:24.787762Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.787793Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:18:24.787800Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-236741","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 22:18:55 up  5:01,  0 user,  load average: 2.89, 2.73, 2.07
	Linux pause-236741 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad] <==
	I1120 22:17:39.219435       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:17:39.219884       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:17:39.220061       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:17:39.220103       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:17:39.220139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:17:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:17:39.411444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:17:39.411519       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:17:39.411552       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:17:39.503786       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:18:09.412361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:18:09.504015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:18:09.504141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:18:09.504234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:18:10.704301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:18:10.704365       1 metrics.go:72] Registering metrics
	I1120 22:18:10.704436       1 controller.go:711] "Syncing nftables rules"
	I1120 22:18:19.418235       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:19.418398       1 main.go:301] handling current node
	
	
	==> kindnet [c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48] <==
	I1120 22:18:33.241144       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:18:33.243492       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:18:33.243635       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:18:33.243649       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:18:33.243678       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:18:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:18:33.484340       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:18:33.484444       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:18:33.484477       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:18:33.485254       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:18:37.829401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:18:37.829570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:18:37.829659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:18:37.829743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1120 22:18:39.284776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:18:39.284885       1 metrics.go:72] Registering metrics
	I1120 22:18:39.284960       1 controller.go:711] "Syncing nftables rules"
	I1120 22:18:43.484270       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:43.484310       1 main.go:301] handling current node
	I1120 22:18:53.484622       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:53.484680       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499] <==
	I1120 22:18:37.846086       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:18:37.846117       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:18:37.846241       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:18:37.846299       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:18:37.851075       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 22:18:37.851164       1 policy_source.go:240] refreshing policies
	I1120 22:18:37.854475       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:18:37.856269       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:18:37.856745       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:18:37.856811       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:18:37.857089       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:18:37.857309       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:18:37.857337       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:18:37.857365       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:18:37.857154       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:18:37.869065       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:18:37.880218       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1120 22:18:37.895604       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:18:37.909383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:18:38.449218       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:18:38.795943       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:18:40.341874       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:18:40.441630       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:18:40.490519       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:18:40.592270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db] <==
	W1120 22:18:24.524228       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524283       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524303       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524372       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524436       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524502       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524572       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524631       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524702       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524767       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524846       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524925       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525646       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525802       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525902       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526024       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526136       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526218       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526274       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526352       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526425       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526552       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526659       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526741       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.528948       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb] <==
	I1120 22:18:40.188351       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:18:40.193528       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 22:18:40.193659       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 22:18:40.193712       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 22:18:40.193761       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:18:40.193791       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:18:40.193812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:18:40.193833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:18:40.193840       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:18:40.193915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:18:40.198235       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:18:40.202403       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:18:40.205171       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:18:40.212661       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:18:40.213964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:18:40.217127       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:18:40.232765       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:18:40.234028       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:18:40.234081       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:18:40.234175       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:18:40.234259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:18:40.234834       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:18:40.234861       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:18:40.235495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:18:40.243794       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66] <==
	I1120 22:17:37.626766       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:17:37.629010       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:17:37.632280       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 22:17:37.632351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 22:17:37.632394       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 22:17:37.632400       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:17:37.632406       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:17:37.642790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:17:37.642935       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-236741" podCIDRs=["10.244.0.0/24"]
	I1120 22:17:37.647071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:17:37.655393       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:17:37.662459       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:17:37.664335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:17:37.664449       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:17:37.664465       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:17:37.664738       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 22:17:37.664838       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:17:37.664850       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:17:37.664863       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:17:37.668391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:17:37.671589       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:17:37.675115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:17:37.675140       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:17:37.675150       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:18:22.616888       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96] <==
	I1120 22:18:34.921974       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:18:36.485655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 22:18:37.832889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-236741\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1120 22:18:38.886328       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:18:38.886399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:18:38.886537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:18:38.930617       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:18:38.930753       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:18:38.937555       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:18:38.937950       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:18:38.938143       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:18:38.939512       1 config.go:200] "Starting service config controller"
	I1120 22:18:38.943828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:18:38.939616       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:18:38.943982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:18:38.945149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 22:18:38.940293       1 config.go:309] "Starting node config controller"
	I1120 22:18:38.945272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:18:38.945307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:18:38.939630       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:18:38.945366       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:18:38.945393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:18:39.045010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3] <==
	I1120 22:17:39.168235       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:17:39.305102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:17:39.411665       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:17:39.411839       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:17:39.411944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:17:39.607707       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:17:39.607761       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:17:39.611898       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:17:39.612204       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:17:39.612274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:17:39.616049       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:17:39.616069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:17:39.616357       1 config.go:200] "Starting service config controller"
	I1120 22:17:39.616371       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:17:39.616664       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:17:39.616679       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:17:39.617055       1 config.go:309] "Starting node config controller"
	I1120 22:17:39.617077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:17:39.617084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:17:39.717509       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:17:39.718378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 22:17:39.718395       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00] <==
	I1120 22:18:36.064283       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:18:38.328942       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:18:38.329115       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:18:38.335159       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:18:38.335358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:18:38.335439       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:38.335475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:38.335520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.335552       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.336659       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:18:38.336739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:18:38.435801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.435931       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:18:38.436030       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911] <==
	E1120 22:17:30.720563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:17:30.720483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:17:30.723097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:17:31.611265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 22:17:31.613586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:17:31.653130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:17:31.656347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:17:31.657345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:17:31.692649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:17:31.725664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:17:31.731242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:17:31.799212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:17:31.816624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:17:31.848685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:17:31.869990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 22:17:31.872407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:17:31.990537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:17:32.003320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1120 22:17:34.588055       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:24.512376       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1120 22:18:24.512477       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1120 22:18:24.512490       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1120 22:18:24.512507       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:24.512732       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1120 22:18:24.512748       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008034    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008533    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbtj6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008944    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bg8b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: I1120 22:18:33.013384    1323 scope.go:117] "RemoveContainer" containerID="6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.013982    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.014744    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbtj6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015135    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bg8b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015562    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4ssl6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2e79a16f-633f-4616-87b8-a0d635313169" pod="kube-system/coredns-66bc5c9577-4ssl6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015934    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65382a64e4a7b502e66482a2d869a89c" pod="kube-system/etcd-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.016298    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb68a040a67dbc228ff2646329c0fe18" pod="kube-system/kube-apiserver-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.016629    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4984d56f3bf65bc5533b99a2aff01656" pod="kube-system/kube-controller-manager-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.734742    1323 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-236741\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.735519    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="4984d56f3bf65bc5533b99a2aff01656" pod="kube-system/kube-controller-manager-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.736639    1323 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-236741\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.769652    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.795776    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-gbtj6\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.804171    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-bg8b2\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.815657    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-4ssl6\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="2e79a16f-633f-4616-87b8-a0d635313169" pod="kube-system/coredns-66bc5c9577-4ssl6"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.823295    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="65382a64e4a7b502e66482a2d869a89c" pod="kube-system/etcd-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.828362    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="bb68a040a67dbc228ff2646329c0fe18" pod="kube-system/kube-apiserver-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.832019    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:43 pause-236741 kubelet[1323]: W1120 22:18:43.846731    1323 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 20 22:18:52 pause-236741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:18:52 pause-236741 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:18:52 pause-236741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-236741 -n pause-236741
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-236741 -n pause-236741: exit status 2 (367.60629ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
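helpers_test.go note: the harness checks individual components through Go templates over the status output ({{.APIServer}} here, {{.Host}} in the second post-mortem below). As a manual-triage sketch only (the combined template is an assumption, not part of the harness), both fields can be queried in one call against the same profile:

	out/minikube-linux-arm64 status -p pause-236741 --format='host:{{.Host}} apiserver:{{.APIServer}}'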
helpers_test.go:269: (dbg) Run:  kubectl --context pause-236741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-236741
helpers_test.go:243: (dbg) docker inspect pause-236741:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222",
	        "Created": "2025-11-20T22:17:06.77258714Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 996437,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:17:06.837503803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/hostname",
	        "HostsPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/hosts",
	        "LogPath": "/var/lib/docker/containers/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222/69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222-json.log",
	        "Name": "/pause-236741",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-236741:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-236741",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "69c555880609dabd9a0f02dd09d05fa0d4f4d0643626622765a1d814f1119222",
	                "LowerDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d40b1f01e2cec084ca86e909d4011ca0768eee8340dc52a24888a1fd2215029/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-236741",
	                "Source": "/var/lib/docker/volumes/pause-236741/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-236741",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-236741",
	                "name.minikube.sigs.k8s.io": "pause-236741",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4273871e11309a54ed59ac20256617c68d90f137fb9a0de995baf3456c086857",
	            "SandboxKey": "/var/run/docker/netns/4273871e1130",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34136"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34134"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34135"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-236741": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:76:65:27:cc:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a35fe1f9cf13d229aa6aea89c169dc5dfbfc3662487e82ccecb13a63f68810b5",
	                    "EndpointID": "095349012d2e054f6df6a8b1b6000282bcad5ef5272248b968975411c5b3046b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-236741",
	                        "69c555880609"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
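The inspect dump above carries the full container configuration; for manual triage of the same profile, a Go template filter narrows it to the fields the post-mortem actually reads (container state and the pause-236741 network's IP). A sketch only, using names taken from this run:

	docker inspect --format '{{.State.Status}} {{(index .NetworkSettings.Networks "pause-236741").IPAddress}}' pause-236741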
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-236741 -n pause-236741
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-236741 -n pause-236741: exit status 2 (355.257052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-236741 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-236741 logs -n 25: (1.455806553s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-787224 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:12 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p missing-upgrade-407986 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-407986    │ jenkins │ v1.32.0 │ 20 Nov 25 22:12 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ delete  │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ ssh     │ -p NoKubernetes-787224 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │                     │
	│ stop    │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:13 UTC │
	│ start   │ -p NoKubernetes-787224 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p missing-upgrade-407986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-407986    │ jenkins │ v1.37.0 │ 20 Nov 25 22:13 UTC │ 20 Nov 25 22:14 UTC │
	│ ssh     │ -p NoKubernetes-787224 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │                     │
	│ delete  │ -p NoKubernetes-787224                                                                                                                   │ NoKubernetes-787224       │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ delete  │ -p missing-upgrade-407986                                                                                                                │ missing-upgrade-407986    │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ stop    │ -p kubernetes-upgrade-410652                                                                                                             │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:14 UTC │
	│ start   │ -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:14 UTC │                     │
	│ start   │ -p stopped-upgrade-239493 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-239493    │ jenkins │ v1.32.0 │ 20 Nov 25 22:14 UTC │ 20 Nov 25 22:15 UTC │
	│ stop    │ stopped-upgrade-239493 stop                                                                                                              │ stopped-upgrade-239493    │ jenkins │ v1.32.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ start   │ -p stopped-upgrade-239493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-239493    │ jenkins │ v1.37.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ delete  │ -p stopped-upgrade-239493                                                                                                                │ stopped-upgrade-239493    │ jenkins │ v1.37.0 │ 20 Nov 25 22:15 UTC │ 20 Nov 25 22:15 UTC │
	│ start   │ -p running-upgrade-803505 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-803505    │ jenkins │ v1.32.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:16 UTC │
	│ start   │ -p running-upgrade-803505 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-803505    │ jenkins │ v1.37.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:16 UTC │
	│ delete  │ -p running-upgrade-803505                                                                                                                │ running-upgrade-803505    │ jenkins │ v1.37.0 │ 20 Nov 25 22:16 UTC │ 20 Nov 25 22:17 UTC │
	│ start   │ -p pause-236741 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:17 UTC │ 20 Nov 25 22:18 UTC │
	│ start   │ -p pause-236741 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:18 UTC │ 20 Nov 25 22:18 UTC │
	│ pause   │ -p pause-236741 --alsologtostderr -v=5                                                                                                   │ pause-236741              │ jenkins │ v1.37.0 │ 20 Nov 25 22:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:18:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:18:22.892490 1000629 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:18:22.892661 1000629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:22.892682 1000629 out.go:374] Setting ErrFile to fd 2...
	I1120 22:18:22.892701 1000629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:18:22.892976 1000629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:18:22.893364 1000629 out.go:368] Setting JSON to false
	I1120 22:18:22.894368 1000629 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18028,"bootTime":1763659075,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:18:22.894478 1000629 start.go:143] virtualization:  
	I1120 22:18:22.898332 1000629 out.go:179] * [pause-236741] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:18:22.902124 1000629 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:18:22.902195 1000629 notify.go:221] Checking for updates...
	I1120 22:18:22.908120 1000629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:18:22.911192 1000629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:18:22.914093 1000629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:18:22.917633 1000629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:18:22.920582 1000629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:18:22.923974 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:22.924580 1000629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:18:22.955247 1000629 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:18:22.955428 1000629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:18:23.030755 1000629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:18:23.020467619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:18:23.030867 1000629 docker.go:319] overlay module found
	I1120 22:18:23.034033 1000629 out.go:179] * Using the docker driver based on existing profile
	I1120 22:18:23.036859 1000629 start.go:309] selected driver: docker
	I1120 22:18:23.036884 1000629 start.go:930] validating driver "docker" against &{Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:23.037017 1000629 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:18:23.037131 1000629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:18:23.104123 1000629 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:18:23.094952495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:18:23.104561 1000629 cni.go:84] Creating CNI manager for ""
	I1120 22:18:23.104620 1000629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:18:23.104668 1000629 start.go:353] cluster config:
	{Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:23.109561 1000629 out.go:179] * Starting "pause-236741" primary control-plane node in "pause-236741" cluster
	I1120 22:18:23.112506 1000629 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:18:23.115448 1000629 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:18:23.118508 1000629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:18:23.118558 1000629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:18:23.118569 1000629 cache.go:65] Caching tarball of preloaded images
	I1120 22:18:23.118641 1000629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:18:23.118655 1000629 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:18:23.118936 1000629 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:18:23.119126 1000629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/config.json ...
	I1120 22:18:23.137964 1000629 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:18:23.137988 1000629 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:18:23.138007 1000629 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:18:23.138029 1000629 start.go:360] acquireMachinesLock for pause-236741: {Name:mk1142cd143591a1f43b45a92b92df2edd3a1536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:18:23.138097 1000629 start.go:364] duration metric: took 47.787µs to acquireMachinesLock for "pause-236741"
	I1120 22:18:23.138121 1000629 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:18:23.138127 1000629 fix.go:54] fixHost starting: 
	I1120 22:18:23.138393 1000629 cli_runner.go:164] Run: docker container inspect pause-236741 --format={{.State.Status}}
	I1120 22:18:23.155412 1000629 fix.go:112] recreateIfNeeded on pause-236741: state=Running err=<nil>
	W1120 22:18:23.155444 1000629 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:18:23.495145  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:23.513583  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:23.513656  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:23.555312  984680 cri.go:89] found id: ""
	I1120 22:18:23.555335  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.555345  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:23.555351  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:23.555410  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:23.597782  984680 cri.go:89] found id: ""
	I1120 22:18:23.597805  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.597813  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:23.597820  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:23.597883  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:23.638033  984680 cri.go:89] found id: ""
	I1120 22:18:23.638065  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.638074  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:23.638080  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:23.638151  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:23.669078  984680 cri.go:89] found id: ""
	I1120 22:18:23.669102  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.669112  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:23.669118  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:23.669179  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:23.717538  984680 cri.go:89] found id: ""
	I1120 22:18:23.717561  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.717569  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:23.717576  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:23.717741  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:23.751133  984680 cri.go:89] found id: ""
	I1120 22:18:23.751213  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.751225  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:23.751232  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:23.751297  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:23.781661  984680 cri.go:89] found id: ""
	I1120 22:18:23.781689  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.781697  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:23.781704  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:23.781761  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:23.829106  984680 cri.go:89] found id: ""
	I1120 22:18:23.829132  984680 logs.go:282] 0 containers: []
	W1120 22:18:23.829141  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:23.829156  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:23.829168  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:23.920781  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:23.920804  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:23.920817  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:23.962120  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:23.962157  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:24.000093  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:24.000124  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:24.141604  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:24.141641  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:26.658755  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:26.668956  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:26.669027  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:26.693620  984680 cri.go:89] found id: ""
	I1120 22:18:26.693647  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.693656  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:26.693662  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:26.693718  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:26.719003  984680 cri.go:89] found id: ""
	I1120 22:18:26.719026  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.719042  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:26.719048  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:26.719109  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:26.743956  984680 cri.go:89] found id: ""
	I1120 22:18:26.743979  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.743987  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:26.743993  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:26.744049  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:26.776162  984680 cri.go:89] found id: ""
	I1120 22:18:26.776188  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.776197  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:26.776204  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:26.776260  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:26.806777  984680 cri.go:89] found id: ""
	I1120 22:18:26.806802  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.806812  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:26.806819  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:26.806876  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:26.832977  984680 cri.go:89] found id: ""
	I1120 22:18:26.833000  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.833009  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:26.833015  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:26.833073  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:26.860118  984680 cri.go:89] found id: ""
	I1120 22:18:26.860143  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.860153  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:26.860165  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:26.860227  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:26.890161  984680 cri.go:89] found id: ""
	I1120 22:18:26.890187  984680 logs.go:282] 0 containers: []
	W1120 22:18:26.890197  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:26.890207  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:26.890218  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:27.008382  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:27.008428  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:27.026088  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:27.026117  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:27.094892  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:27.094915  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:27.094928  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:27.131548  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:27.131586  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:23.158662 1000629 out.go:252] * Updating the running docker "pause-236741" container ...
	I1120 22:18:23.158706 1000629 machine.go:94] provisionDockerMachine start ...
	I1120 22:18:23.158788 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.176630 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.176952 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.176966 1000629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:18:23.322675 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-236741
	
	I1120 22:18:23.322700 1000629 ubuntu.go:182] provisioning hostname "pause-236741"
	I1120 22:18:23.322793 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.340879 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.341200 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.341218 1000629 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-236741 && echo "pause-236741" | sudo tee /etc/hostname
	I1120 22:18:23.494133 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-236741
	
	I1120 22:18:23.494224 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:23.521405 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:23.521716 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:23.521740 1000629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-236741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-236741/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-236741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:18:23.679482 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:18:23.679555 1000629 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:18:23.679604 1000629 ubuntu.go:190] setting up certificates
	I1120 22:18:23.679640 1000629 provision.go:84] configureAuth start
	I1120 22:18:23.679732 1000629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-236741
	I1120 22:18:23.706135 1000629 provision.go:143] copyHostCerts
	I1120 22:18:23.706227 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:18:23.706244 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:18:23.706321 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:18:23.706441 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:18:23.706447 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:18:23.706473 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:18:23.706523 1000629 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:18:23.706528 1000629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:18:23.706558 1000629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:18:23.706611 1000629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.pause-236741 san=[127.0.0.1 192.168.85.2 localhost minikube pause-236741]
	I1120 22:18:24.140272 1000629 provision.go:177] copyRemoteCerts
	I1120 22:18:24.140388 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:18:24.140473 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:24.163738 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:24.267489 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:18:24.288465 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 22:18:24.309178 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:18:24.328867 1000629 provision.go:87] duration metric: took 649.187245ms to configureAuth
	I1120 22:18:24.328893 1000629 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:18:24.329132 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:24.329247 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:24.347789 1000629 main.go:143] libmachine: Using SSH client type: native
	I1120 22:18:24.348127 1000629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1120 22:18:24.348143 1000629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:18:29.734652 1000629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:18:29.734670 1000629 machine.go:97] duration metric: took 6.575955573s to provisionDockerMachine
	I1120 22:18:29.734680 1000629 start.go:293] postStartSetup for "pause-236741" (driver="docker")
	I1120 22:18:29.734691 1000629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:18:29.734744 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:18:29.734784 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:29.757384 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:29.872703 1000629 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:18:29.877096 1000629 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:18:29.877124 1000629 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:18:29.877135 1000629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:18:29.877201 1000629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:18:29.877280 1000629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:18:29.877386 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:18:29.886724 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:18:29.911966 1000629 start.go:296] duration metric: took 177.269318ms for postStartSetup
	I1120 22:18:29.912121 1000629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:18:29.912171 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:29.939355 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.068116 1000629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:18:30.074420 1000629 fix.go:56] duration metric: took 6.936282904s for fixHost
	I1120 22:18:30.074445 1000629 start.go:83] releasing machines lock for "pause-236741", held for 6.936335516s
	I1120 22:18:30.074534 1000629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-236741
	I1120 22:18:30.096583 1000629 ssh_runner.go:195] Run: cat /version.json
	I1120 22:18:30.096638 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:30.096978 1000629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:18:30.097039 1000629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-236741
	I1120 22:18:30.127159 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.137853 1000629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/pause-236741/id_rsa Username:docker}
	I1120 22:18:30.334858 1000629 ssh_runner.go:195] Run: systemctl --version
	I1120 22:18:30.341565 1000629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:18:30.382224 1000629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:18:30.386688 1000629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:18:30.386812 1000629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:18:30.395462 1000629 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:18:30.395484 1000629 start.go:496] detecting cgroup driver to use...
	I1120 22:18:30.395514 1000629 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:18:30.395570 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:18:30.411359 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:18:30.424788 1000629 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:18:30.424850 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:18:30.441072 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:18:30.454903 1000629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:18:30.599117 1000629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:18:30.728486 1000629 docker.go:234] disabling docker service ...
	I1120 22:18:30.728644 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:18:30.743855 1000629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:18:30.756926 1000629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:18:30.904985 1000629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:18:31.043369 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:18:31.057981 1000629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:18:31.075538 1000629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:18:31.075616 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.085846 1000629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:18:31.085920 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.095736 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.105899 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.116071 1000629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:18:31.125152 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.134925 1000629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.144297 1000629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:18:31.153773 1000629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:18:31.162256 1000629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:18:31.170498 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:31.313015 1000629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:18:31.537349 1000629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:18:31.537415 1000629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:18:31.541200 1000629 start.go:564] Will wait 60s for crictl version
	I1120 22:18:31.541272 1000629 ssh_runner.go:195] Run: which crictl
	I1120 22:18:31.544973 1000629 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:18:31.572843 1000629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:18:31.572951 1000629 ssh_runner.go:195] Run: crio --version
	I1120 22:18:31.601097 1000629 ssh_runner.go:195] Run: crio --version
	I1120 22:18:31.631630 1000629 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:18:29.663275  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:29.674602  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:29.674675  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:29.705575  984680 cri.go:89] found id: ""
	I1120 22:18:29.705598  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.705606  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:29.705613  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:29.705670  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:29.734156  984680 cri.go:89] found id: ""
	I1120 22:18:29.734179  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.734187  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:29.734193  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:29.734301  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:29.768931  984680 cri.go:89] found id: ""
	I1120 22:18:29.768954  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.768962  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:29.768969  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:29.769030  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:29.801457  984680 cri.go:89] found id: ""
	I1120 22:18:29.801480  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.801487  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:29.801493  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:29.801550  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:29.830461  984680 cri.go:89] found id: ""
	I1120 22:18:29.830485  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.830493  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:29.830500  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:29.830558  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:29.857234  984680 cri.go:89] found id: ""
	I1120 22:18:29.857256  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.857265  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:29.857271  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:29.857329  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:29.890485  984680 cri.go:89] found id: ""
	I1120 22:18:29.890509  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.890517  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:29.890523  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:29.890581  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:29.923700  984680 cri.go:89] found id: ""
	I1120 22:18:29.923723  984680 logs.go:282] 0 containers: []
	W1120 22:18:29.923732  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:29.923741  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:29.923759  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:30.058319  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:30.058362  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:30.088909  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:30.088943  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:30.211154  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:30.211177  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:30.211190  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:30.253172  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:30.253207  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:31.634653 1000629 cli_runner.go:164] Run: docker network inspect pause-236741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:18:31.650582 1000629 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:18:31.655078 1000629 kubeadm.go:884] updating cluster {Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:18:31.655225 1000629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:18:31.655284 1000629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:18:31.689597 1000629 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:18:31.689620 1000629 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:18:31.689682 1000629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:18:31.718630 1000629 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:18:31.718656 1000629 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:18:31.718665 1000629 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1120 22:18:31.718760 1000629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-236741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:18:31.718841 1000629 ssh_runner.go:195] Run: crio config
	I1120 22:18:31.793209 1000629 cni.go:84] Creating CNI manager for ""
	I1120 22:18:31.793246 1000629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:18:31.793265 1000629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:18:31.793289 1000629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-236741 NodeName:pause-236741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:18:31.793419 1000629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-236741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:18:31.793497 1000629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:18:31.801455 1000629 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:18:31.801598 1000629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:18:31.809830 1000629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1120 22:18:31.823345 1000629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:18:31.836683 1000629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1120 22:18:31.849565 1000629 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:18:31.853494 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:31.987511 1000629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:18:32.001694 1000629 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741 for IP: 192.168.85.2
	I1120 22:18:32.001716 1000629 certs.go:195] generating shared ca certs ...
	I1120 22:18:32.001734 1000629 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.001875 1000629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:18:32.001938 1000629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:18:32.001949 1000629 certs.go:257] generating profile certs ...
	I1120 22:18:32.002046 1000629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key
	I1120 22:18:32.002116 1000629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.key.bfd21aee
	I1120 22:18:32.002161 1000629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.key
	I1120 22:18:32.002282 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:18:32.002315 1000629 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:18:32.002331 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:18:32.002357 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:18:32.002383 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:18:32.002407 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:18:32.002453 1000629 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:18:32.003301 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:18:32.026010 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:18:32.044867 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:18:32.063025 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:18:32.083745 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 22:18:32.101538 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:18:32.119071 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:18:32.136479 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:18:32.153916 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:18:32.171419 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:18:32.188500 1000629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:18:32.205791 1000629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:18:32.218381 1000629 ssh_runner.go:195] Run: openssl version
	I1120 22:18:32.225059 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.232659 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:18:32.240427 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.244631 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.244705 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:18:32.288254 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:18:32.295832 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.303382 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:18:32.310679 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.314585 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.314648 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:18:32.355873 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:18:32.363469 1000629 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.371024 1000629 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:18:32.378858 1000629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.383091 1000629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.383178 1000629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:18:32.424629 1000629 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:18:32.432262 1000629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:18:32.436031 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:18:32.477356 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:18:32.518320 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:18:32.560277 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:18:32.601294 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:18:32.642648 1000629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 22:18:32.683896 1000629 kubeadm.go:401] StartCluster: {Name:pause-236741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-236741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:18:32.684024 1000629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:18:32.684100 1000629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:18:32.711766 1000629 cri.go:89] found id: "3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f"
	I1120 22:18:32.711794 1000629 cri.go:89] found id: "9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3"
	I1120 22:18:32.711799 1000629 cri.go:89] found id: "58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad"
	I1120 22:18:32.711803 1000629 cri.go:89] found id: "7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911"
	I1120 22:18:32.711806 1000629 cri.go:89] found id: "c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db"
	I1120 22:18:32.711809 1000629 cri.go:89] found id: "6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	I1120 22:18:32.711812 1000629 cri.go:89] found id: "9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33"
	I1120 22:18:32.711815 1000629 cri.go:89] found id: ""
	I1120 22:18:32.711865 1000629 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:18:32.723070 1000629 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:18:32Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:18:32.723155 1000629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:18:32.731085 1000629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:18:32.731105 1000629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:18:32.731157 1000629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:18:32.738479 1000629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:18:32.739195 1000629 kubeconfig.go:125] found "pause-236741" server: "https://192.168.85.2:8443"
	I1120 22:18:32.739981 1000629 kapi.go:59] client config for pause-236741: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 22:18:32.740465 1000629 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 22:18:32.740486 1000629 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 22:18:32.740492 1000629 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 22:18:32.740497 1000629 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 22:18:32.740501 1000629 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 22:18:32.740768 1000629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:18:32.749287 1000629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:18:32.749323 1000629 kubeadm.go:602] duration metric: took 18.212109ms to restartPrimaryControlPlane
	I1120 22:18:32.749332 1000629 kubeadm.go:403] duration metric: took 65.445252ms to StartCluster
	I1120 22:18:32.749376 1000629 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.749455 1000629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:18:32.750292 1000629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:18:32.750514 1000629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:18:32.750862 1000629 config.go:182] Loaded profile config "pause-236741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:18:32.750915 1000629 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:18:32.754160 1000629 out.go:179] * Enabled addons: 
	I1120 22:18:32.754169 1000629 out.go:179] * Verifying Kubernetes components...
	I1120 22:18:32.756932 1000629 addons.go:515] duration metric: took 6.005928ms for enable addons: enabled=[]
	I1120 22:18:32.756998 1000629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:18:32.790256  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:32.801014  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:32.801084  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:32.862873  984680 cri.go:89] found id: ""
	I1120 22:18:32.862897  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.862910  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:32.862916  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:32.863044  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:32.899685  984680 cri.go:89] found id: ""
	I1120 22:18:32.899707  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.899716  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:32.899722  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:32.899778  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:32.934715  984680 cri.go:89] found id: ""
	I1120 22:18:32.934737  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.934746  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:32.934752  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:32.934806  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:32.981430  984680 cri.go:89] found id: ""
	I1120 22:18:32.981513  984680 logs.go:282] 0 containers: []
	W1120 22:18:32.981537  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:32.981570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:32.981650  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:33.037826  984680 cri.go:89] found id: ""
	I1120 22:18:33.037849  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.037857  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:33.037864  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:33.037921  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:33.093444  984680 cri.go:89] found id: ""
	I1120 22:18:33.093466  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.093474  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:33.093481  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:33.093537  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:33.151700  984680 cri.go:89] found id: ""
	I1120 22:18:33.151721  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.151730  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:33.151736  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:33.151792  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:33.208295  984680 cri.go:89] found id: ""
	I1120 22:18:33.208358  984680 logs.go:282] 0 containers: []
	W1120 22:18:33.208382  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:33.208407  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:33.208443  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:33.260943  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:33.264676  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:33.314393  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:33.314418  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:33.486520  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:33.486602  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:33.509637  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:33.509664  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:33.618171  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:36.119153  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:36.139724  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:36.139842  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:36.216161  984680 cri.go:89] found id: ""
	I1120 22:18:36.216229  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.216252  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:36.216281  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:36.216360  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:36.267886  984680 cri.go:89] found id: ""
	I1120 22:18:36.267952  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.267974  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:36.267997  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:36.268074  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:36.318897  984680 cri.go:89] found id: ""
	I1120 22:18:36.318964  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.319011  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:36.319032  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:36.319163  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:36.371602  984680 cri.go:89] found id: ""
	I1120 22:18:36.371670  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.371692  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:36.371714  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:36.371798  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:36.415871  984680 cri.go:89] found id: ""
	I1120 22:18:36.415938  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.415960  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:36.415983  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:36.416060  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:36.464128  984680 cri.go:89] found id: ""
	I1120 22:18:36.464209  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.464238  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:36.464260  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:36.464342  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:36.505319  984680 cri.go:89] found id: ""
	I1120 22:18:36.505399  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.505421  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:36.505452  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:36.505529  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:36.581265  984680 cri.go:89] found id: ""
	I1120 22:18:36.581356  984680 logs.go:282] 0 containers: []
	W1120 22:18:36.581380  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:36.581418  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:36.581454  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:36.732130  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:36.732213  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:36.753689  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:36.753716  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:36.886432  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:36.886499  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:36.886526  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:36.935400  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:36.935444  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:33.117980 1000629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:18:33.171565 1000629 node_ready.go:35] waiting up to 6m0s for node "pause-236741" to be "Ready" ...
	I1120 22:18:37.758329 1000629 node_ready.go:49] node "pause-236741" is "Ready"
	I1120 22:18:37.758361 1000629 node_ready.go:38] duration metric: took 4.586751448s for node "pause-236741" to be "Ready" ...
	I1120 22:18:37.758378 1000629 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:18:37.758438 1000629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:37.775569 1000629 api_server.go:72] duration metric: took 5.025015358s to wait for apiserver process to appear ...
	I1120 22:18:37.775603 1000629 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:18:37.775621 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:37.790890 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 22:18:37.790919 1000629 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 22:18:39.506654  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:39.517276  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:39.517344  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:39.551102  984680 cri.go:89] found id: ""
	I1120 22:18:39.551126  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.551135  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:39.551141  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:39.551201  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:39.582589  984680 cri.go:89] found id: ""
	I1120 22:18:39.582622  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.582631  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:39.582638  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:39.582696  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:39.613586  984680 cri.go:89] found id: ""
	I1120 22:18:39.613610  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.613619  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:39.613626  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:39.613685  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:39.642082  984680 cri.go:89] found id: ""
	I1120 22:18:39.642109  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.642117  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:39.642126  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:39.642200  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:39.670446  984680 cri.go:89] found id: ""
	I1120 22:18:39.670472  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.670480  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:39.670487  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:39.670549  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:39.700154  984680 cri.go:89] found id: ""
	I1120 22:18:39.700181  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.700191  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:39.700197  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:39.700259  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:39.726587  984680 cri.go:89] found id: ""
	I1120 22:18:39.726614  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.726623  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:39.726629  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:39.726688  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:39.758324  984680 cri.go:89] found id: ""
	I1120 22:18:39.758349  984680 logs.go:282] 0 containers: []
	W1120 22:18:39.758359  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:39.758368  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:39.758411  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:39.882985  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:39.883072  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:39.899411  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:39.899445  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:39.971211  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:39.971232  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:39.971244  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:40.010218  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:40.010267  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:38.276409 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:38.284900 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 22:18:38.284927 1000629 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 22:18:38.776312 1000629 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:18:38.784822 1000629 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:18:38.785988 1000629 api_server.go:141] control plane version: v1.34.1
	I1120 22:18:38.786016 1000629 api_server.go:131] duration metric: took 1.010406581s to wait for apiserver health ...
	I1120 22:18:38.786024 1000629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:18:38.790326 1000629 system_pods.go:59] 7 kube-system pods found
	I1120 22:18:38.790366 1000629 system_pods.go:61] "coredns-66bc5c9577-4ssl6" [2e79a16f-633f-4616-87b8-a0d635313169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:18:38.790375 1000629 system_pods.go:61] "etcd-pause-236741" [de3ca9c3-20fe-43e0-8420-5a7b7d100a82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:18:38.790380 1000629 system_pods.go:61] "kindnet-gbtj6" [85e46865-a2d3-4037-a84c-4ed172caf51d] Running
	I1120 22:18:38.790387 1000629 system_pods.go:61] "kube-apiserver-pause-236741" [3e1b62e0-86db-4798-bce3-30bd50540f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:18:38.790394 1000629 system_pods.go:61] "kube-controller-manager-pause-236741" [5c5b61a2-25e9-4daa-b1eb-505512928b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:18:38.790399 1000629 system_pods.go:61] "kube-proxy-bg8b2" [e4b15707-0927-425d-8b96-e3e547526892] Running
	I1120 22:18:38.790404 1000629 system_pods.go:61] "kube-scheduler-pause-236741" [553498d5-ab29-49d0-8282-e57a04beeb0c] Running
	I1120 22:18:38.790414 1000629 system_pods.go:74] duration metric: took 4.384557ms to wait for pod list to return data ...
	I1120 22:18:38.790424 1000629 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:18:38.792907 1000629 default_sa.go:45] found service account: "default"
	I1120 22:18:38.792977 1000629 default_sa.go:55] duration metric: took 2.545871ms for default service account to be created ...
	I1120 22:18:38.793001 1000629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:18:38.797345 1000629 system_pods.go:86] 7 kube-system pods found
	I1120 22:18:38.797429 1000629 system_pods.go:89] "coredns-66bc5c9577-4ssl6" [2e79a16f-633f-4616-87b8-a0d635313169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:18:38.797454 1000629 system_pods.go:89] "etcd-pause-236741" [de3ca9c3-20fe-43e0-8420-5a7b7d100a82] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:18:38.797495 1000629 system_pods.go:89] "kindnet-gbtj6" [85e46865-a2d3-4037-a84c-4ed172caf51d] Running
	I1120 22:18:38.797522 1000629 system_pods.go:89] "kube-apiserver-pause-236741" [3e1b62e0-86db-4798-bce3-30bd50540f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:18:38.797542 1000629 system_pods.go:89] "kube-controller-manager-pause-236741" [5c5b61a2-25e9-4daa-b1eb-505512928b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:18:38.797577 1000629 system_pods.go:89] "kube-proxy-bg8b2" [e4b15707-0927-425d-8b96-e3e547526892] Running
	I1120 22:18:38.797600 1000629 system_pods.go:89] "kube-scheduler-pause-236741" [553498d5-ab29-49d0-8282-e57a04beeb0c] Running
	I1120 22:18:38.797621 1000629 system_pods.go:126] duration metric: took 4.601741ms to wait for k8s-apps to be running ...
	I1120 22:18:38.797654 1000629 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:18:38.797747 1000629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:18:38.812018 1000629 system_svc.go:56] duration metric: took 14.367546ms WaitForService to wait for kubelet
	I1120 22:18:38.812096 1000629 kubeadm.go:587] duration metric: took 6.061547126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:18:38.812158 1000629 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:18:38.817284 1000629 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:18:38.817364 1000629 node_conditions.go:123] node cpu capacity is 2
	I1120 22:18:38.817391 1000629 node_conditions.go:105] duration metric: took 5.214262ms to run NodePressure ...
	I1120 22:18:38.817417 1000629 start.go:242] waiting for startup goroutines ...
	I1120 22:18:38.817450 1000629 start.go:247] waiting for cluster config update ...
	I1120 22:18:38.817476 1000629 start.go:256] writing updated cluster config ...
	I1120 22:18:38.817884 1000629 ssh_runner.go:195] Run: rm -f paused
	I1120 22:18:38.827331 1000629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:18:38.828106 1000629 kapi.go:59] client config for pause-236741: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/profiles/pause-236741/client.key", CAFile:"/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 22:18:38.833756 1000629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4ssl6" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:18:40.839301 1000629 pod_ready.go:104] pod "coredns-66bc5c9577-4ssl6" is not "Ready", error: <nil>
	W1120 22:18:42.840625 1000629 pod_ready.go:104] pod "coredns-66bc5c9577-4ssl6" is not "Ready", error: <nil>
	I1120 22:18:42.579522  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:42.590029  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:42.590102  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:42.617228  984680 cri.go:89] found id: ""
	I1120 22:18:42.617305  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.617328  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:42.617351  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:42.617439  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:42.644046  984680 cri.go:89] found id: ""
	I1120 22:18:42.644109  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.644125  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:42.644131  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:42.644212  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:42.668881  984680 cri.go:89] found id: ""
	I1120 22:18:42.668905  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.668914  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:42.668920  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:42.668980  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:42.695074  984680 cri.go:89] found id: ""
	I1120 22:18:42.695097  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.695105  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:42.695111  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:42.695173  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:42.720549  984680 cri.go:89] found id: ""
	I1120 22:18:42.720626  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.720650  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:42.720665  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:42.720752  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:42.746163  984680 cri.go:89] found id: ""
	I1120 22:18:42.746186  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.746195  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:42.746233  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:42.746312  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:42.773491  984680 cri.go:89] found id: ""
	I1120 22:18:42.773514  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.773522  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:42.773529  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:42.773608  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:42.798521  984680 cri.go:89] found id: ""
	I1120 22:18:42.798544  984680 logs.go:282] 0 containers: []
	W1120 22:18:42.798552  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:42.798593  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:42.798618  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:42.921612  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:42.921650  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:42.938199  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:42.938235  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:43.011621  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:43.011646  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:43.011659  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:43.049412  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:43.049450  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:45.584178  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:45.594605  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:45.594670  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:45.621692  984680 cri.go:89] found id: ""
	I1120 22:18:45.621717  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.621726  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:45.621733  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:45.621806  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:45.654737  984680 cri.go:89] found id: ""
	I1120 22:18:45.654764  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.654773  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:45.654779  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:45.654835  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:45.681366  984680 cri.go:89] found id: ""
	I1120 22:18:45.681403  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.681412  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:45.681420  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:45.681478  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:45.706700  984680 cri.go:89] found id: ""
	I1120 22:18:45.706726  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.706735  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:45.706742  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:45.706886  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:45.732430  984680 cri.go:89] found id: ""
	I1120 22:18:45.732455  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.732464  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:45.732470  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:45.732526  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:45.757983  984680 cri.go:89] found id: ""
	I1120 22:18:45.758058  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.758081  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:45.758117  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:45.758202  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:45.783718  984680 cri.go:89] found id: ""
	I1120 22:18:45.783740  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.783748  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:45.783754  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:45.783812  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:45.808961  984680 cri.go:89] found id: ""
	I1120 22:18:45.809025  984680 logs.go:282] 0 containers: []
	W1120 22:18:45.809047  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:45.809073  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:45.809090  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:45.946124  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:45.946169  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:45.963361  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:45.963400  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:46.028693  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:46.028716  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:46.028730  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:46.066910  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:46.066949  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:45.339032 1000629 pod_ready.go:94] pod "coredns-66bc5c9577-4ssl6" is "Ready"
	I1120 22:18:45.339068 1000629 pod_ready.go:86] duration metric: took 6.505236592s for pod "coredns-66bc5c9577-4ssl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:45.341818 1000629 pod_ready.go:83] waiting for pod "etcd-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:47.347106 1000629 pod_ready.go:94] pod "etcd-pause-236741" is "Ready"
	I1120 22:18:47.347136 1000629 pod_ready.go:86] duration metric: took 2.0052902s for pod "etcd-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:47.349532 1000629 pod_ready.go:83] waiting for pod "kube-apiserver-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:48.857012 1000629 pod_ready.go:94] pod "kube-apiserver-pause-236741" is "Ready"
	I1120 22:18:48.857041 1000629 pod_ready.go:86] duration metric: took 1.507478468s for pod "kube-apiserver-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:48.861181 1000629 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:18:50.866315 1000629 pod_ready.go:104] pod "kube-controller-manager-pause-236741" is not "Ready", error: <nil>
	I1120 22:18:51.367168 1000629 pod_ready.go:94] pod "kube-controller-manager-pause-236741" is "Ready"
	I1120 22:18:51.367197 1000629 pod_ready.go:86] duration metric: took 2.505986512s for pod "kube-controller-manager-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.369670 1000629 pod_ready.go:83] waiting for pod "kube-proxy-bg8b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.374158 1000629 pod_ready.go:94] pod "kube-proxy-bg8b2" is "Ready"
	I1120 22:18:51.374188 1000629 pod_ready.go:86] duration metric: took 4.491857ms for pod "kube-proxy-bg8b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.376460 1000629 pod_ready.go:83] waiting for pod "kube-scheduler-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.537252 1000629 pod_ready.go:94] pod "kube-scheduler-pause-236741" is "Ready"
	I1120 22:18:51.537282 1000629 pod_ready.go:86] duration metric: took 160.798926ms for pod "kube-scheduler-pause-236741" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:18:51.537295 1000629 pod_ready.go:40] duration metric: took 12.709879606s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:18:51.592483 1000629 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:18:51.595556 1000629 out.go:179] * Done! kubectl is now configured to use "pause-236741" cluster and "default" namespace by default
	I1120 22:18:48.600800  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:48.611202  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:48.611273  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:48.643406  984680 cri.go:89] found id: ""
	I1120 22:18:48.643432  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.643440  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:48.643446  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:48.643509  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:48.686260  984680 cri.go:89] found id: ""
	I1120 22:18:48.686286  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.686295  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:48.686301  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:48.686359  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:48.716191  984680 cri.go:89] found id: ""
	I1120 22:18:48.716216  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.716225  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:48.716232  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:48.716289  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:48.742413  984680 cri.go:89] found id: ""
	I1120 22:18:48.742438  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.742447  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:48.742453  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:48.742513  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:48.770171  984680 cri.go:89] found id: ""
	I1120 22:18:48.770193  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.770202  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:48.770208  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:48.770274  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:48.797881  984680 cri.go:89] found id: ""
	I1120 22:18:48.797907  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.797915  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:48.797922  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:48.797982  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:48.824382  984680 cri.go:89] found id: ""
	I1120 22:18:48.824405  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.824415  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:48.824421  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:48.824480  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:48.849885  984680 cri.go:89] found id: ""
	I1120 22:18:48.849910  984680 logs.go:282] 0 containers: []
	W1120 22:18:48.849919  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:48.849928  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:48.849941  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:48.977875  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:48.977915  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:48.994382  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:48.994410  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:49.065336  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1120 22:18:49.065402  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:49.065431  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:49.102035  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:49.102073  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:51.632972  984680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:18:51.653570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1120 22:18:51.653657  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1120 22:18:51.693396  984680 cri.go:89] found id: ""
	I1120 22:18:51.693420  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.693432  984680 logs.go:284] No container was found matching "kube-apiserver"
	I1120 22:18:51.693439  984680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1120 22:18:51.693505  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1120 22:18:51.731167  984680 cri.go:89] found id: ""
	I1120 22:18:51.731189  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.731198  984680 logs.go:284] No container was found matching "etcd"
	I1120 22:18:51.731211  984680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1120 22:18:51.731267  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1120 22:18:51.765527  984680 cri.go:89] found id: ""
	I1120 22:18:51.765555  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.765564  984680 logs.go:284] No container was found matching "coredns"
	I1120 22:18:51.765570  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1120 22:18:51.765627  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1120 22:18:51.818274  984680 cri.go:89] found id: ""
	I1120 22:18:51.818321  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.818330  984680 logs.go:284] No container was found matching "kube-scheduler"
	I1120 22:18:51.818337  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1120 22:18:51.818407  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1120 22:18:51.860510  984680 cri.go:89] found id: ""
	I1120 22:18:51.860537  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.860570  984680 logs.go:284] No container was found matching "kube-proxy"
	I1120 22:18:51.860578  984680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1120 22:18:51.860649  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1120 22:18:51.894663  984680 cri.go:89] found id: ""
	I1120 22:18:51.894686  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.894695  984680 logs.go:284] No container was found matching "kube-controller-manager"
	I1120 22:18:51.894701  984680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1120 22:18:51.894767  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1120 22:18:51.926854  984680 cri.go:89] found id: ""
	I1120 22:18:51.926880  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.926888  984680 logs.go:284] No container was found matching "kindnet"
	I1120 22:18:51.926894  984680 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1120 22:18:51.926955  984680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1120 22:18:51.954760  984680 cri.go:89] found id: ""
	I1120 22:18:51.954786  984680 logs.go:282] 0 containers: []
	W1120 22:18:51.954794  984680 logs.go:284] No container was found matching "storage-provisioner"
	I1120 22:18:51.954808  984680 logs.go:123] Gathering logs for CRI-O ...
	I1120 22:18:51.954820  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1120 22:18:52.000601  984680 logs.go:123] Gathering logs for container status ...
	I1120 22:18:52.000646  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1120 22:18:52.051225  984680 logs.go:123] Gathering logs for kubelet ...
	I1120 22:18:52.051253  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1120 22:18:52.205529  984680 logs.go:123] Gathering logs for dmesg ...
	I1120 22:18:52.205572  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1120 22:18:52.223072  984680 logs.go:123] Gathering logs for describe nodes ...
	I1120 22:18:52.223104  984680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1120 22:18:52.309846  984680 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> CRI-O <==
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.167410052Z" level=info msg="Started container" PID=2370 containerID=a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201 description=kube-system/coredns-66bc5c9577-4ssl6/coredns id=ca18e76c-7240-424a-896e-2de979f96057 name=/runtime.v1.RuntimeService/StartContainer sandboxID=274ac4a7622de0615bedc48a486364c005001a7b18883045bd3c33ee1b3b26af
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.175777668Z" level=info msg="Created container 281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb: kube-system/kube-controller-manager-pause-236741/kube-controller-manager" id=955ec58d-89a3-417b-ab21-8492b3c8db1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.177000937Z" level=info msg="Starting container: 281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb" id=3ae093f0-20c3-4898-8ad3-5970e4aeb5bc name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.186792637Z" level=info msg="Started container" PID=2404 containerID=281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb description=kube-system/kube-controller-manager-pause-236741/kube-controller-manager id=3ae093f0-20c3-4898-8ad3-5970e4aeb5bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec2b88ff7e5b62af01320bf590825b6073c408363543b08ad1c9813ede3ad1b9
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.190015915Z" level=info msg="Created container c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499: kube-system/kube-apiserver-pause-236741/kube-apiserver" id=eff5437a-5668-4edb-b4da-475c86641908 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.197513473Z" level=info msg="Created container 306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00: kube-system/kube-scheduler-pause-236741/kube-scheduler" id=9c79e0b2-f7c9-47a8-b2bd-0a9a67020f75 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.204564632Z" level=info msg="Starting container: c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499" id=5b2b3aef-aa5b-447e-b631-156bd765356f name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.205035687Z" level=info msg="Starting container: 306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00" id=f2b48e6a-3f1a-4059-8279-d88f1d5fa412 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.206754412Z" level=info msg="Started container" PID=2392 containerID=c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499 description=kube-system/kube-apiserver-pause-236741/kube-apiserver id=5b2b3aef-aa5b-447e-b631-156bd765356f name=/runtime.v1.RuntimeService/StartContainer sandboxID=57eefca9a4ec876586dbf2ea1fd1284de9d72d83718f0516abb7eb1522830280
	Nov 20 22:18:33 pause-236741 crio[2075]: time="2025-11-20T22:18:33.213930464Z" level=info msg="Started container" PID=2395 containerID=306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00 description=kube-system/kube-scheduler-pause-236741/kube-scheduler id=f2b48e6a-3f1a-4059-8279-d88f1d5fa412 name=/runtime.v1.RuntimeService/StartContainer sandboxID=506c12662998cdd8f5c23a68e3bc8c9ec1dc7570196fbafd348684b282566994
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.484662668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.488366257Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.488409687Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.48843413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491677553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491710956Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.491733086Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494895752Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494936188Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.494962962Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498095564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498129575Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.498154076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.501492785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:18:43 pause-236741 crio[2075]: time="2025-11-20T22:18:43.501527871Z" level=info msg="Updated default CNI network name to kindnet"
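
	The crio events above show kindnet rewriting its CNI config atomically (write 10-kindnet.conflist.temp, then rename it over 10-kindnet.conflist), with crio re-reading the default network after each filesystem event. A minimal sketch for inspecting the resulting file on the node, assuming the pause-236741 profile is still running under the docker driver:

	# print the kindnet CNI config that crio reports loading above
	minikube -p pause-236741 ssh "sudo cat /etc/cni/net.d/10-kindnet.conflist"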
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	281b6ca6a9d13       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   ec2b88ff7e5b6       kube-controller-manager-pause-236741   kube-system
	c24841d4aedba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   57eefca9a4ec8       kube-apiserver-pause-236741            kube-system
	306be761b64f9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   506c12662998c       kube-scheduler-pause-236741            kube-system
	a4a604a24a4c3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   274ac4a7622de       coredns-66bc5c9577-4ssl6               kube-system
	c468b960ba6f0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   48d8fe8acbbe1       kindnet-gbtj6                          kube-system
	1560c64f26dfa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   7db1538639a28       kube-proxy-bg8b2                       kube-system
	8ceea0cc240b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   224066caa129e       etcd-pause-236741                      kube-system
	3c387221343fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   274ac4a7622de       coredns-66bc5c9577-4ssl6               kube-system
	9f0c71877dc9b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   7db1538639a28       kube-proxy-bg8b2                       kube-system
	58052be823cbf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   48d8fe8acbbe1       kindnet-gbtj6                          kube-system
	7e36379b8c3d4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   506c12662998c       kube-scheduler-pause-236741            kube-system
	c3511d0b77176       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   57eefca9a4ec8       kube-apiserver-pause-236741            kube-system
	6bf0157c5e580       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   ec2b88ff7e5b6       kube-controller-manager-pause-236741   kube-system
	9e252ff958f22       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   224066caa129e       etcd-pause-236741                      kube-system
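
	In the listing above each kube-system container appears twice: ATTEMPT 1 is Running and the original ATTEMPT 0 is Exited inside the same pod sandbox, i.e. the whole set was restarted once roughly 24 seconds before this report was captured. A sketch for reproducing the listing directly on the node, under the same assumptions as above:

	# list all containers, including exited ones, as in the table above
	minikube -p pause-236741 ssh "sudo crictl ps -a"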
	
	
	==> coredns [3c387221343fc267293874d0cc25d9f5fba82bd20373e7422a0706579c53966f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35147 - 57121 "HINFO IN 3839960908849394579.3685477291472904481. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014040829s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4a604a24a4c32db44f4b62a5104e2347a70864166bb4eba5bf30105c4e13201] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42202 - 21341 "HINFO IN 1381893881431481202.4857600648861538686. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032907083s
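
	The "forbidden" list errors from the restarted coredns (a4a604a…) line up with the kube-apiserver coming back around 22:18:37 (see its log further down): the requests appear to have been answered before the fresh apiserver finished syncing its caches, after which coredns starts serving on :53 normally. A hedged check that the coredns service account does hold the permissions it was briefly denied, assuming the pause-236741 kubeconfig context still exists:

	# both commands should print "yes" once the control plane is healthy
	kubectl --context pause-236741 auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns
	kubectl --context pause-236741 auth can-i list services --as=system:serviceaccount:kube-system:coredns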
	
	
	==> describe nodes <==
	Name:               pause-236741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-236741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=pause-236741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_17_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:17:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-236741
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:18:45 +0000   Thu, 20 Nov 2025 22:18:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-236741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                523e06a0-3ec3-47af-bbb9-b7381baa2345
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4ssl6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-236741                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-gbtj6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-236741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-236741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-bg8b2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-236741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-236741 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-236741 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-236741 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-236741 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-236741 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-236741 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-236741 event: Registered Node pause-236741 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-236741 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-236741 event: Registered Node pause-236741 in Controller
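
	The events table records two start-up cycles: kubelet starting 92s/84s ago with the first RegisteredNode at 80s, then a second kube-proxy Starting and RegisteredNode at 18s/17s after the restart exercised by this test. A sketch for regenerating this description against the live profile, assuming the pause-236741 context is still present:

	# re-run the node description captured above
	kubectl --context pause-236741 describe node pause-236741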
	
	
	==> dmesg <==
	[Nov20 21:39] overlayfs: idmapped layers are currently not supported
	[Nov20 21:41] overlayfs: idmapped layers are currently not supported
	[Nov20 21:46] overlayfs: idmapped layers are currently not supported
	[  +2.922279] overlayfs: idmapped layers are currently not supported
	[Nov20 21:48] overlayfs: idmapped layers are currently not supported
	[Nov20 21:52] overlayfs: idmapped layers are currently not supported
	[Nov20 21:54] overlayfs: idmapped layers are currently not supported
	[Nov20 21:59] overlayfs: idmapped layers are currently not supported
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8ceea0cc240b99fe15d8cac6aacce8187742305096eab5d78f2ca6a5cec87c90] <==
	{"level":"warn","ts":"2025-11-20T22:18:35.865454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.885483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.908035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.923190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.944700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.977287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:35.991185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.023469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.039639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.068091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.085087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.104033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.131916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.158338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.190726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.234663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.266612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.304776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.339179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.373547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.440004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.468974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.498821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.522887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:18:36.727336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58006","server-name":"","error":"EOF"}
	
	
	==> etcd [9e252ff958f22c644f163926d6bf7b361937414d14e4ab60cf3323e25776ac33] <==
	{"level":"warn","ts":"2025-11-20T22:17:29.667195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.683071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.705183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.738818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.797463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.809804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:17:29.866937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57164","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T22:18:24.510501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-20T22:18:24.510554Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-236741","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-20T22:18:24.510648Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T22:18:24.784112Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T22:18:24.784198Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.784219Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-20T22:18:24.784274Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-20T22:18:24.784352Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784379Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T22:18:24.784386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784423Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T22:18:24.784431Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T22:18:24.784437Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.787676Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-20T22:18:24.787762Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T22:18:24.787793Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:18:24.787800Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-236741","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 22:18:57 up  5:01,  0 user,  load average: 2.89, 2.73, 2.07
	Linux pause-236741 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58052be823cbf5d2cb1b7278e73604249f66a05273becbd8e1db08315c2828ad] <==
	I1120 22:17:39.219435       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:17:39.219884       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:17:39.220061       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:17:39.220103       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:17:39.220139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:17:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:17:39.411444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:17:39.411519       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:17:39.411552       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:17:39.503786       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:18:09.412361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:18:09.504015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:18:09.504141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:18:09.504234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:18:10.704301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:18:10.704365       1 metrics.go:72] Registering metrics
	I1120 22:18:10.704436       1 controller.go:711] "Syncing nftables rules"
	I1120 22:18:19.418235       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:19.418398       1 main.go:301] handling current node
	
	
	==> kindnet [c468b960ba6f0f4b556950a20799939d1b5d15055220c3912c73be316d71ea48] <==
	I1120 22:18:33.241144       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:18:33.243492       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:18:33.243635       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:18:33.243649       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:18:33.243678       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:18:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:18:33.484340       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:18:33.484444       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:18:33.484477       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:18:33.485254       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:18:37.829401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:18:37.829570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:18:37.829659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:18:37.829743       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1120 22:18:39.284776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:18:39.284885       1 metrics.go:72] Registering metrics
	I1120 22:18:39.284960       1 controller.go:711] "Syncing nftables rules"
	I1120 22:18:43.484270       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:43.484310       1 main.go:301] handling current node
	I1120 22:18:53.484622       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:18:53.484680       1 main.go:301] handling current node
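
	Both kindnet runs log "nri plugin exited: failed to connect to NRI service" because /var/run/nri/nri.sock is absent on this node; kindnet carries on without NRI and still syncs its nftables rules. A quick, hedged way to confirm the socket simply is not there:

	# expect "No such file or directory" when NRI is not enabled in CRI-O
	minikube -p pause-236741 ssh "ls -l /var/run/nri/nri.sock"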
	
	
	==> kube-apiserver [c24841d4aedba96f3657d3c1cd050405cb054a258ab72633179d5dfe858ee499] <==
	I1120 22:18:37.846086       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:18:37.846117       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:18:37.846241       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:18:37.846299       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:18:37.851075       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 22:18:37.851164       1 policy_source.go:240] refreshing policies
	I1120 22:18:37.854475       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:18:37.856269       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:18:37.856745       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:18:37.856811       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:18:37.857089       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:18:37.857309       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:18:37.857337       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:18:37.857365       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:18:37.857154       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:18:37.869065       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:18:37.880218       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1120 22:18:37.895604       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:18:37.909383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:18:38.449218       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:18:38.795943       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:18:40.341874       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:18:40.441630       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:18:40.490519       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:18:40.592270       1 controller.go:667] quota admission added evaluator for: deployments.apps
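
	The restarted kube-apiserver (c24841d…) reports its caches synced at 22:18:37 and then re-registers quota admission evaluators; the lone error about "no API server IP addresses were listed in storage" is the endpoint reconciler declining to wipe the kubernetes Service endpoints during startup, as its own message states. A hedged readiness spot-check against the same cluster:

	# all checks should report ok once the restart settles
	kubectl --context pause-236741 get --raw '/readyz?verbose'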
	
	
	==> kube-apiserver [c3511d0b771763187a5bc3795736cf83741f9ce4ddc7e64d0cecd65f6e18a4db] <==
	W1120 22:18:24.524228       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524283       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524303       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524372       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524436       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524502       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524572       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524631       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524702       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524767       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524846       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.524925       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525646       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525802       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.525902       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526024       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526136       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526218       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526274       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526352       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526425       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526552       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526659       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.526741       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1120 22:18:24.528948       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [281b6ca6a9d138dc1796e75589468c438f4c9f72821152ad2b8ecdd19f9a99cb] <==
	I1120 22:18:40.188351       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:18:40.193528       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 22:18:40.193659       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 22:18:40.193712       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 22:18:40.193761       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:18:40.193791       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:18:40.193812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:18:40.193833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:18:40.193840       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:18:40.193915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:18:40.198235       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:18:40.202403       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:18:40.205171       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:18:40.212661       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:18:40.213964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:18:40.217127       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:18:40.232765       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:18:40.234028       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:18:40.234081       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:18:40.234175       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:18:40.234259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:18:40.234834       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:18:40.234861       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:18:40.235495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:18:40.243794       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66] <==
	I1120 22:17:37.626766       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:17:37.629010       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:17:37.632280       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 22:17:37.632351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 22:17:37.632394       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 22:17:37.632400       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:17:37.632406       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:17:37.642790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:17:37.642935       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-236741" podCIDRs=["10.244.0.0/24"]
	I1120 22:17:37.647071       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:17:37.655393       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:17:37.662459       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:17:37.664335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:17:37.664449       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:17:37.664465       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:17:37.664738       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 22:17:37.664838       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:17:37.664850       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:17:37.664863       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:17:37.668391       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:17:37.671589       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:17:37.675115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:17:37.675140       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:17:37.675150       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:18:22.616888       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1560c64f26dfacbde83eecc300320a5b84c302efea1b1ce06d936589c5c29a96] <==
	I1120 22:18:34.921974       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:18:36.485655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 22:18:37.832889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-236741\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1120 22:18:38.886328       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:18:38.886399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:18:38.886537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:18:38.930617       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:18:38.930753       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:18:38.937555       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:18:38.937950       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:18:38.938143       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:18:38.939512       1 config.go:200] "Starting service config controller"
	I1120 22:18:38.943828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:18:38.939616       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:18:38.943982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:18:38.945149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 22:18:38.940293       1 config.go:309] "Starting node config controller"
	I1120 22:18:38.945272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:18:38.945307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:18:38.939630       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:18:38.945366       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:18:38.945393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:18:39.045010       1 shared_informer.go:356] "Caches are synced" controller="service config"
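
	Both the original and the restarted kube-proxy emit the same "configuration may be incomplete or incorrect" warning: nodePortAddresses is left unset in the deployed config, so NodePort connections are accepted on every local IP; since it appears in both runs, it predates the restart. A sketch for viewing the config that produced it:

	# inspect the kube-proxy ConfigMap deployed in this cluster
	kubectl --context pause-236741 -n kube-system get configmap kube-proxy -o yaml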
	
	
	==> kube-proxy [9f0c71877dc9b95ffc1e640d923eae9a1f572ce5667f3ce16d8c165e843a5eb3] <==
	I1120 22:17:39.168235       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:17:39.305102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:17:39.411665       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:17:39.411839       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:17:39.411944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:17:39.607707       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:17:39.607761       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:17:39.611898       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:17:39.612204       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:17:39.612274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:17:39.616049       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:17:39.616069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:17:39.616357       1 config.go:200] "Starting service config controller"
	I1120 22:17:39.616371       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:17:39.616664       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:17:39.616679       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:17:39.617055       1 config.go:309] "Starting node config controller"
	I1120 22:17:39.617077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:17:39.617084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:17:39.717509       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:17:39.718378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 22:17:39.718395       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [306be761b64f92f12723a09bd4e37c5668d09f748f3845c0914d328ef2ba3f00] <==
	I1120 22:18:36.064283       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:18:38.328942       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:18:38.329115       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:18:38.335159       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:18:38.335358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:18:38.335439       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:38.335475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:38.335520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.335552       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.336659       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:18:38.336739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:18:38.435801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:18:38.435931       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:18:38.436030       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7e36379b8c3d46ef6b0a620644bc9c41cc65c59a2f47b7a11d658e4590de5911] <==
	E1120 22:17:30.720563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:17:30.720483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:17:30.723097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:17:31.611265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 22:17:31.613586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:17:31.653130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:17:31.656347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:17:31.657345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:17:31.692649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:17:31.725664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:17:31.731242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:17:31.799212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:17:31.816624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:17:31.848685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:17:31.869990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 22:17:31.872407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:17:31.990537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:17:32.003320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1120 22:17:34.588055       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:24.512376       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1120 22:18:24.512477       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1120 22:18:24.512490       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1120 22:18:24.512507       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:18:24.512732       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1120 22:18:24.512748       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008034    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008533    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbtj6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.008944    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bg8b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: I1120 22:18:33.013384    1323 scope.go:117] "RemoveContainer" containerID="6bf0157c5e58049b0c8e654b9aad876ccfe2925b6377f6a85f6f87a79d216d66"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.013982    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.014744    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbtj6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015135    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bg8b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015562    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4ssl6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2e79a16f-633f-4616-87b8-a0d635313169" pod="kube-system/coredns-66bc5c9577-4ssl6"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.015934    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65382a64e4a7b502e66482a2d869a89c" pod="kube-system/etcd-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.016298    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb68a040a67dbc228ff2646329c0fe18" pod="kube-system/kube-apiserver-pause-236741"
	Nov 20 22:18:33 pause-236741 kubelet[1323]: E1120 22:18:33.016629    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-236741\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4984d56f3bf65bc5533b99a2aff01656" pod="kube-system/kube-controller-manager-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.734742    1323 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-236741\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.735519    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="4984d56f3bf65bc5533b99a2aff01656" pod="kube-system/kube-controller-manager-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.736639    1323 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-236741\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.769652    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.795776    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-gbtj6\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="85e46865-a2d3-4037-a84c-4ed172caf51d" pod="kube-system/kindnet-gbtj6"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.804171    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-bg8b2\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="e4b15707-0927-425d-8b96-e3e547526892" pod="kube-system/kube-proxy-bg8b2"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.815657    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-4ssl6\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="2e79a16f-633f-4616-87b8-a0d635313169" pod="kube-system/coredns-66bc5c9577-4ssl6"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.823295    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="65382a64e4a7b502e66482a2d869a89c" pod="kube-system/etcd-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.828362    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="bb68a040a67dbc228ff2646329c0fe18" pod="kube-system/kube-apiserver-pause-236741"
	Nov 20 22:18:37 pause-236741 kubelet[1323]: E1120 22:18:37.832019    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-236741\" is forbidden: User \"system:node:pause-236741\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-236741' and this object" podUID="5f01ce7a736f51f15bbab27dfff545a1" pod="kube-system/kube-scheduler-pause-236741"
	Nov 20 22:18:43 pause-236741 kubelet[1323]: W1120 22:18:43.846731    1323 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 20 22:18:52 pause-236741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:18:52 pause-236741 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:18:52 pause-236741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-236741 -n pause-236741
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-236741 -n pause-236741: exit status 2 (425.004013ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-236741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.43s)
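
Note: several failures below (for example TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive) exit with MK_ADDON_ENABLE_PAUSED because the paused-state check quoted in their stderr, `sudo runc list -f json`, fails on the node with `open /run/runc: no such file or directory`. A minimal sketch of reproducing that check by hand follows; it is illustrative only (not part of minikube or the test harness), assumes it is run on the affected node (e.g. via `minikube ssh -p old-k8s-version-443192`), and assumes runc's JSON list output carries `id` and `status` fields.

	// triage_runc_list.go: illustrative sketch only, not part of minikube or the harness.
	// Runs the same command the MK_ADDON_ENABLE_PAUSED error reports as failing and
	// looks for paused containers in its JSON output.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// containerState holds the two fields of interest; the json tags are an
	// assumption based on runc's documented `list --format json` output.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// The exact command quoted in the failures: `sudo runc list -f json`.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On the nodes in this report, this is where
			// "open /run/runc: no such file or directory" surfaces.
			log.Fatalf("runc list failed: %v", err)
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			log.Fatalf("could not decode runc output: %v", err)
		}
		for _, s := range states {
			if s.Status == "paused" {
				fmt.Println("paused container:", s.ID)
			}
		}
	}

On the failing nodes the exec step is where the error would surface, consistent with the runc state directory /run/runc not existing on these crio-based nodes.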

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.038939ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-443192 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-443192 describe deploy/metrics-server -n kube-system: exit status 1 (87.463945ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-443192 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-443192
helpers_test.go:243: (dbg) docker inspect old-k8s-version-443192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	        "Created": "2025-11-20T22:21:23.635114568Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1017680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:21:23.704687567Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hostname",
	        "HostsPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hosts",
	        "LogPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753-json.log",
	        "Name": "/old-k8s-version-443192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-443192:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-443192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	                "LowerDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-443192",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-443192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-443192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd17061bebf1c3d9f13b7bbaa9cfc7147d461dd4038e58c80a75883a838b9a7e",
	            "SandboxKey": "/var/run/docker/netns/dd17061bebf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34158"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34161"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34159"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34160"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-443192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:aa:43:70:53:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be8765199279f8eee237afe7c8b9f46458c0018ce58bf28750fa9832048503b9",
	                    "EndpointID": "d5bbb1d3226d91b9f12a4549f84ab4a4d00759de111d66470ae44f674da6d556",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-443192",
	                        "947acc53b1a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25: (1.263088083s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-640880 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo containerd config dump                                                                                                                                                                                                  │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo crio config                                                                                                                                                                                                             │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ delete  │ -p cilium-640880                                                                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:19 UTC │
	│ start   │ -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p kubernetes-upgrade-410652                                                                                                                                                                                                                  │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420078    │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p force-systemd-env-833370                                                                                                                                                                                                                   │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ cert-options-961311 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:21:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:21:17.659765 1017300 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:21:17.660022 1017300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:21:17.660037 1017300 out.go:374] Setting ErrFile to fd 2...
	I1120 22:21:17.660042 1017300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:21:17.660338 1017300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:21:17.660841 1017300 out.go:368] Setting JSON to false
	I1120 22:21:17.661948 1017300 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18203,"bootTime":1763659075,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:21:17.662766 1017300 start.go:143] virtualization:  
	I1120 22:21:17.666719 1017300 out.go:179] * [old-k8s-version-443192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:21:17.671107 1017300 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:21:17.671197 1017300 notify.go:221] Checking for updates...
	I1120 22:21:17.677678 1017300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:21:17.680873 1017300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:21:17.684340 1017300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:21:17.688426 1017300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:21:17.691600 1017300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:21:17.695085 1017300 config.go:182] Loaded profile config "cert-expiration-420078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:21:17.695208 1017300 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:21:17.723141 1017300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:21:17.723278 1017300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:21:17.790876 1017300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:21:17.780767848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:21:17.791023 1017300 docker.go:319] overlay module found
	I1120 22:21:17.796007 1017300 out.go:179] * Using the docker driver based on user configuration
	I1120 22:21:17.798907 1017300 start.go:309] selected driver: docker
	I1120 22:21:17.798929 1017300 start.go:930] validating driver "docker" against <nil>
	I1120 22:21:17.798945 1017300 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:21:17.799836 1017300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:21:17.866931 1017300 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:21:17.857624782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:21:17.867186 1017300 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 22:21:17.867421 1017300 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:21:17.870420 1017300 out.go:179] * Using Docker driver with root privileges
	I1120 22:21:17.873433 1017300 cni.go:84] Creating CNI manager for ""
	I1120 22:21:17.873501 1017300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:21:17.873516 1017300 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 22:21:17.873593 1017300 start.go:353] cluster config:
	{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:21:17.876804 1017300 out.go:179] * Starting "old-k8s-version-443192" primary control-plane node in "old-k8s-version-443192" cluster
	I1120 22:21:17.879677 1017300 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:21:17.882691 1017300 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:21:17.885528 1017300 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:21:17.885575 1017300 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1120 22:21:17.885590 1017300 cache.go:65] Caching tarball of preloaded images
	I1120 22:21:17.885679 1017300 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:21:17.885697 1017300 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1120 22:21:17.885815 1017300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:21:17.885837 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json: {Name:mk8d310c398bac15499809853df3d4ff978fa034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:17.886005 1017300 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:21:17.905707 1017300 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:21:17.905739 1017300 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:21:17.905758 1017300 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:21:17.905784 1017300 start.go:360] acquireMachinesLock for old-k8s-version-443192: {Name:mk170647942fc2bf46e44d6cf36b5ae812935bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:21:17.905966 1017300 start.go:364] duration metric: took 107.825µs to acquireMachinesLock for "old-k8s-version-443192"
	I1120 22:21:17.905999 1017300 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:21:17.906084 1017300 start.go:125] createHost starting for "" (driver="docker")
	I1120 22:21:17.909527 1017300 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:21:17.909776 1017300 start.go:159] libmachine.API.Create for "old-k8s-version-443192" (driver="docker")
	I1120 22:21:17.909821 1017300 client.go:173] LocalClient.Create starting
	I1120 22:21:17.909896 1017300 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:21:17.909936 1017300 main.go:143] libmachine: Decoding PEM data...
	I1120 22:21:17.909953 1017300 main.go:143] libmachine: Parsing certificate...
	I1120 22:21:17.910012 1017300 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:21:17.910032 1017300 main.go:143] libmachine: Decoding PEM data...
	I1120 22:21:17.910042 1017300 main.go:143] libmachine: Parsing certificate...
	I1120 22:21:17.910423 1017300 cli_runner.go:164] Run: docker network inspect old-k8s-version-443192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:21:17.927188 1017300 cli_runner.go:211] docker network inspect old-k8s-version-443192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:21:17.927273 1017300 network_create.go:284] running [docker network inspect old-k8s-version-443192] to gather additional debugging logs...
	I1120 22:21:17.927296 1017300 cli_runner.go:164] Run: docker network inspect old-k8s-version-443192
	W1120 22:21:17.947637 1017300 cli_runner.go:211] docker network inspect old-k8s-version-443192 returned with exit code 1
	I1120 22:21:17.947670 1017300 network_create.go:287] error running [docker network inspect old-k8s-version-443192]: docker network inspect old-k8s-version-443192: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-443192 not found
	I1120 22:21:17.947684 1017300 network_create.go:289] output of [docker network inspect old-k8s-version-443192]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-443192 not found
	
	** /stderr **
	I1120 22:21:17.947787 1017300 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:21:17.964656 1017300 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:21:17.965068 1017300 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:21:17.965323 1017300 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:21:17.965605 1017300 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1745d0e70cac IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:18:6e:82:3e:69} reservation:<nil>}
	I1120 22:21:17.966039 1017300 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e1180}
	I1120 22:21:17.966063 1017300 network_create.go:124] attempt to create docker network old-k8s-version-443192 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 22:21:17.966131 1017300 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-443192 old-k8s-version-443192
	I1120 22:21:18.051147 1017300 network_create.go:108] docker network old-k8s-version-443192 192.168.85.0/24 created
	I1120 22:21:18.051181 1017300 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-443192" container
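
The lines above show how a free Docker subnet is picked: every existing bridge network's IPAM subnet is read, taken /24 ranges are skipped, and the first free 192.168.x.0/24 is used to create the cluster network. A rough Go sketch of that flow, using os/exec against the docker CLI rather than minikube's internal network package (helper names and the trimmed flag set are assumptions):

	// Rough sketch (not minikube's network package): scan existing docker networks,
	// skip taken /24s, and create the cluster network on the first free one.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// takenSubnets returns the IPAM subnet of every existing docker network.
	func takenSubnets() (map[string]bool, error) {
		out, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			return nil, err
		}
		taken := map[string]bool{}
		for _, id := range strings.Fields(string(out)) {
			cidr, err := exec.Command("docker", "network", "inspect", id,
				"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
			if err != nil {
				return nil, err
			}
			taken[strings.TrimSpace(string(cidr))] = true
		}
		return taken, nil
	}

	func main() {
		taken, err := takenSubnets()
		if err != nil {
			panic(err)
		}
		// Walk the same 192.168.x.0/24 ladder the log shows: 49, 58, 67, 76, 85, ...
		for third := 49; third < 255; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				continue // "skipping subnet ... that is taken"
			}
			gateway := fmt.Sprintf("192.168.%d.1", third)
			out, err := exec.Command("docker", "network", "create", "--driver=bridge",
				"--subnet="+subnet, "--gateway="+gateway,
				"-o", "com.docker.network.driver.mtu=1500",
				"old-k8s-version-443192").CombinedOutput()
			fmt.Println(strings.TrimSpace(string(out)), err)
			return
		}
	}
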
	I1120 22:21:18.051266 1017300 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:21:18.068677 1017300 cli_runner.go:164] Run: docker volume create old-k8s-version-443192 --label name.minikube.sigs.k8s.io=old-k8s-version-443192 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:21:18.087444 1017300 oci.go:103] Successfully created a docker volume old-k8s-version-443192
	I1120 22:21:18.087546 1017300 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-443192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-443192 --entrypoint /usr/bin/test -v old-k8s-version-443192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:21:18.608979 1017300 oci.go:107] Successfully prepared a docker volume old-k8s-version-443192
	I1120 22:21:18.609061 1017300 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:21:18.609074 1017300 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 22:21:18.609153 1017300 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-443192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 22:21:23.562360 1017300 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-443192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.953165477s)
	I1120 22:21:23.562408 1017300 kic.go:203] duration metric: took 4.953331091s to extract preloaded images to volume ...
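
The preload step above avoids pulling images inside the new node: the lz4 tarball of v1.28.0 cri-o images is untarred straight into the cluster's named volume by a throwaway container. A minimal sketch of that command via os/exec (image digest omitted; paths copied from the log):

	// Sketch of the extraction container: mount the preload tarball read-only and
	// the cluster volume, then untar the cached images into /extractDir.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		preload := "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924" // digest omitted
		start := time.Now()
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "old-k8s-version-443192:/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println(string(out), err)
			return
		}
		fmt.Printf("extracted preload in %s\n", time.Since(start)) // ~4.95s in this run
	}
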
	W1120 22:21:23.562553 1017300 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:21:23.562669 1017300 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:21:23.616311 1017300 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-443192 --name old-k8s-version-443192 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-443192 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-443192 --network old-k8s-version-443192 --ip 192.168.85.2 --volume old-k8s-version-443192:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:21:23.933935 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Running}}
	I1120 22:21:23.973644 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:21:23.994174 1017300 cli_runner.go:164] Run: docker exec old-k8s-version-443192 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:21:24.047065 1017300 oci.go:144] the created container "old-k8s-version-443192" has a running status.
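
After `docker run`, the harness confirms the node container actually came up by templating its state. A small sketch of that readiness poll (the 30s timeout is an assumption; the log itself checks State.Running and State.Status once each):

	// Sketch of the post-run readiness check: poll the container's State.Running
	// template until it reports true.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				"old-k8s-version-443192", "--format={{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				fmt.Println("container is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for container")
	}
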
	I1120 22:21:24.047098 1017300 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa...
	I1120 22:21:24.613053 1017300 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:21:24.643983 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:21:24.661790 1017300 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:21:24.661825 1017300 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-443192 chown docker:docker /home/docker/.ssh/authorized_keys]
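
The SSH key step above generates a fresh RSA keypair on the host and installs the public half as /home/docker/.ssh/authorized_keys inside the container, then fixes its ownership. One way to sketch it in Go, using golang.org/x/crypto/ssh for the authorized_keys encoding (the `docker cp` transfer below is only an assumption for illustration; the log does not show how kic_runner copies the file):

	// Sketch of the ssh key setup: generate an RSA keypair, write id_rsa/id_rsa.pub,
	// and install the public key as the container's authorized_keys.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"
		"os/exec"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
			panic(err)
		}
		// authorized_keys form of the public key (~380 bytes, matching the log's 381).
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
			panic(err)
		}
		// Copy the public key in and fix ownership, as the kic_runner lines above do.
		// (docker cp is an assumed transfer mechanism, used here for brevity.)
		exec.Command("docker", "cp", "id_rsa.pub",
			"old-k8s-version-443192:/home/docker/.ssh/authorized_keys").Run()
		exec.Command("docker", "exec", "--privileged", "old-k8s-version-443192",
			"chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run()
	}
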
	I1120 22:21:24.706413 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:21:24.725454 1017300 machine.go:94] provisionDockerMachine start ...
	I1120 22:21:24.725601 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:24.743826 1017300 main.go:143] libmachine: Using SSH client type: native
	I1120 22:21:24.744173 1017300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1120 22:21:24.744191 1017300 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:21:24.744919 1017300 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:21:27.894450 1017300 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:21:27.894518 1017300 ubuntu.go:182] provisioning hostname "old-k8s-version-443192"
	I1120 22:21:27.894618 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:27.912969 1017300 main.go:143] libmachine: Using SSH client type: native
	I1120 22:21:27.913288 1017300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1120 22:21:27.913300 1017300 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-443192 && echo "old-k8s-version-443192" | sudo tee /etc/hostname
	I1120 22:21:28.069058 1017300 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:21:28.069184 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:28.089162 1017300 main.go:143] libmachine: Using SSH client type: native
	I1120 22:21:28.089509 1017300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1120 22:21:28.089536 1017300 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-443192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-443192/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-443192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:21:28.243701 1017300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
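
Provisioning then happens over SSH to the container's published 22/tcp port (127.0.0.1:34157 above): read the hostname, set it, and patch /etc/hosts. A sketch of that client using golang.org/x/crypto/ssh (user, port, and key path taken from the log; host-key checking disabled as on a throwaway test node):

	// Sketch of the SSH provisioning client: dial the published 22/tcp port and run
	// the hostname commands from the log.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34157", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		for _, cmd := range []string{
			"hostname",
			`sudo hostname old-k8s-version-443192 && echo "old-k8s-version-443192" | sudo tee /etc/hostname`,
		} {
			sess, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			out, err := sess.CombinedOutput(cmd)
			sess.Close()
			fmt.Printf("%s -> %s (%v)\n", cmd, out, err)
		}
	}
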
	I1120 22:21:28.243730 1017300 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:21:28.243758 1017300 ubuntu.go:190] setting up certificates
	I1120 22:21:28.243768 1017300 provision.go:84] configureAuth start
	I1120 22:21:28.243844 1017300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:21:28.261430 1017300 provision.go:143] copyHostCerts
	I1120 22:21:28.261500 1017300 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:21:28.261512 1017300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:21:28.261592 1017300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:21:28.261700 1017300 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:21:28.261714 1017300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:21:28.261746 1017300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:21:28.261814 1017300 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:21:28.261835 1017300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:21:28.261862 1017300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:21:28.261917 1017300 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-443192 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-443192]
	I1120 22:21:29.309810 1017300 provision.go:177] copyRemoteCerts
	I1120 22:21:29.309952 1017300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:21:29.310029 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:29.332313 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:21:29.435055 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:21:29.454560 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:21:29.473345 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 22:21:29.491750 1017300 provision.go:87] duration metric: took 1.247954328s to configureAuth
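
configureAuth signs a per-node server certificate against the existing minikube CA with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-443192). A hedged sketch with crypto/x509, assuming PKCS#1 RSA key files named ca.pem/ca-key.pem as in the certs directory above; the serial handling and key usages are illustrative, not minikube's exact values:

	// Hedged sketch of configureAuth's server cert: sign a node certificate against
	// the existing CA with the SANs from the log (PKCS#1 RSA files assumed).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func mustRead(path string) []byte {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		return data
	}

	func main() {
		caBlock, _ := pem.Decode(mustRead("ca.pem"))
		caKeyBlock, _ := pem.Decode(mustRead("ca-key.pem"))
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
		if err != nil {
			panic(err)
		}
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-443192"}},
			// SANs from the provision.go:117 line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-443192"},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		os.WriteFile("server-key.pem", pem.EncodeToMemory(
			&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
	}
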
	I1120 22:21:29.491781 1017300 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:21:29.492015 1017300 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:21:29.492150 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:29.511326 1017300 main.go:143] libmachine: Using SSH client type: native
	I1120 22:21:29.511648 1017300 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1120 22:21:29.511670 1017300 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:21:29.818064 1017300 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:21:29.818087 1017300 machine.go:97] duration metric: took 5.092599566s to provisionDockerMachine
	I1120 22:21:29.818097 1017300 client.go:176] duration metric: took 11.9082656s to LocalClient.Create
	I1120 22:21:29.818114 1017300 start.go:167] duration metric: took 11.908339955s to libmachine.API.Create "old-k8s-version-443192"
	I1120 22:21:29.818121 1017300 start.go:293] postStartSetup for "old-k8s-version-443192" (driver="docker")
	I1120 22:21:29.818132 1017300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:21:29.818198 1017300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:21:29.818243 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:29.838260 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:21:29.939249 1017300 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:21:29.942596 1017300 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:21:29.942625 1017300 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:21:29.942638 1017300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:21:29.942696 1017300 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:21:29.942791 1017300 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:21:29.942902 1017300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:21:29.950675 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:21:29.969230 1017300 start.go:296] duration metric: took 151.0855ms for postStartSetup
	I1120 22:21:29.969650 1017300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:21:29.987167 1017300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:21:29.987562 1017300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:21:29.987615 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:30.027581 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:21:30.158270 1017300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:21:30.164271 1017300 start.go:128] duration metric: took 12.258169376s to createHost
	I1120 22:21:30.164296 1017300 start.go:83] releasing machines lock for "old-k8s-version-443192", held for 12.258315404s
	I1120 22:21:30.164369 1017300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:21:30.181526 1017300 ssh_runner.go:195] Run: cat /version.json
	I1120 22:21:30.181539 1017300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:21:30.181583 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:30.181600 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:21:30.201363 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:21:30.219709 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:21:30.398255 1017300 ssh_runner.go:195] Run: systemctl --version
	I1120 22:21:30.404956 1017300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:21:30.444281 1017300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:21:30.449137 1017300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:21:30.449237 1017300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:21:30.479970 1017300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:21:30.480049 1017300 start.go:496] detecting cgroup driver to use...
	I1120 22:21:30.480088 1017300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:21:30.480148 1017300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:21:30.498313 1017300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:21:30.513096 1017300 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:21:30.513177 1017300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:21:30.531167 1017300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:21:30.550181 1017300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:21:30.677798 1017300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:21:30.801528 1017300 docker.go:234] disabling docker service ...
	I1120 22:21:30.801609 1017300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:21:30.832392 1017300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:21:30.845990 1017300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:21:30.979713 1017300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:21:31.106673 1017300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:21:31.121139 1017300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:21:31.137252 1017300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1120 22:21:31.137322 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.147508 1017300 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:21:31.147637 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.158400 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.168686 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.179177 1017300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:21:31.188268 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.197255 1017300 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.211009 1017300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:21:31.220253 1017300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:21:31.228211 1017300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:21:31.235996 1017300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:21:31.347657 1017300 ssh_runner.go:195] Run: sudo systemctl restart crio
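
The sed calls above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before restarting it. A local Go equivalent of the two key edits (regexp in place of sed; same file and keys as the log):

	// Rough local equivalent of the sed edits: set the pause image and cgroup
	// driver in 02-crio.conf before the crio restart.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		conf, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, conf, 0644); err != nil {
			panic(err)
		}
		// minikube then reloads systemd and restarts crio; crictl.yaml and the
		// kubelet drop-in both point at unix:///var/run/crio/crio.sock.
	}
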
	I1120 22:21:31.513753 1017300 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:21:31.513841 1017300 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:21:31.518171 1017300 start.go:564] Will wait 60s for crictl version
	I1120 22:21:31.518283 1017300 ssh_runner.go:195] Run: which crictl
	I1120 22:21:31.522255 1017300 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:21:31.549806 1017300 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:21:31.549956 1017300 ssh_runner.go:195] Run: crio --version
	I1120 22:21:31.579149 1017300 ssh_runner.go:195] Run: crio --version
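
Start-up then blocks on the CRI-O socket and checks the runtime version through crictl, which reports cri-o 1.34.2 above. A sketch of that wait loop (60s budget taken from the log):

	// Sketch of the "Will wait 60s for socket path / crictl version" step.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
				break
			}
			if time.Now().After(deadline) {
				panic("crio.sock never appeared")
			}
			time.Sleep(time.Second)
		}
		out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // e.g. RuntimeName: cri-o / RuntimeVersion: 1.34.2
	}
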
	I1120 22:21:31.616364 1017300 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1120 22:21:31.619349 1017300 cli_runner.go:164] Run: docker network inspect old-k8s-version-443192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:21:31.635012 1017300 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:21:31.638909 1017300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:21:31.649339 1017300 kubeadm.go:884] updating cluster {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:21:31.649464 1017300 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:21:31.649520 1017300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:21:31.683393 1017300 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:21:31.683420 1017300 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:21:31.683476 1017300 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:21:31.712530 1017300 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:21:31.712553 1017300 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:21:31.712562 1017300 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1120 22:21:31.712701 1017300 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-443192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:21:31.712798 1017300 ssh_runner.go:195] Run: crio config
	I1120 22:21:31.770149 1017300 cni.go:84] Creating CNI manager for ""
	I1120 22:21:31.770174 1017300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:21:31.770194 1017300 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:21:31.770218 1017300 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-443192 NodeName:old-k8s-version-443192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:21:31.770355 1017300 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-443192"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:21:31.770433 1017300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 22:21:31.778406 1017300 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:21:31.778477 1017300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:21:31.786103 1017300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1120 22:21:31.799313 1017300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:21:31.815696 1017300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1120 22:21:31.830267 1017300 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:21:31.833970 1017300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:21:31.843593 1017300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:21:31.956729 1017300 ssh_runner.go:195] Run: sudo systemctl start kubelet
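
A few lines up, /etc/hosts gets a control-plane.minikube.internal entry for 192.168.85.2 via a grep/echo/cp one-liner. The same edit as a small Go sketch (run on the node as root; trailing-newline handling simplified):

	// Sketch of the /etc/hosts edit: drop any old control-plane.minikube.internal
	// mapping and append the node IP.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // remove the stale mapping, if any
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry, "")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
			panic(err)
		}
	}
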
	I1120 22:21:31.979630 1017300 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192 for IP: 192.168.85.2
	I1120 22:21:31.979657 1017300 certs.go:195] generating shared ca certs ...
	I1120 22:21:31.979674 1017300 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:31.979858 1017300 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:21:31.979923 1017300 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:21:31.979938 1017300 certs.go:257] generating profile certs ...
	I1120 22:21:31.980018 1017300 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.key
	I1120 22:21:31.980036 1017300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt with IP's: []
	I1120 22:21:33.490606 1017300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt ...
	I1120 22:21:33.490639 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: {Name:mk02ab7b2cf5bea6dbb2e3abe45c82f6151743d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:33.490896 1017300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.key ...
	I1120 22:21:33.490914 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.key: {Name:mkfb57526a82c299dbf5e6d602569b3f37913883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:33.491076 1017300 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e
	I1120 22:21:33.491096 1017300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt.3493d06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 22:21:33.555715 1017300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt.3493d06e ...
	I1120 22:21:33.555743 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt.3493d06e: {Name:mkd8ce175cb2e1114f6802fdc1d3e19f44064304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:33.555944 1017300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e ...
	I1120 22:21:33.555962 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e: {Name:mk3be1ae413330dd008909d6ef4ff016962df80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:33.556053 1017300 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt.3493d06e -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt
	I1120 22:21:33.556148 1017300 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key
	I1120 22:21:33.556201 1017300 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key
	I1120 22:21:33.556213 1017300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt with IP's: []
	I1120 22:21:34.005704 1017300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt ...
	I1120 22:21:34.005753 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt: {Name:mke5b45050bcc2d8c70aea88b02a885abe073a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:34.005974 1017300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key ...
	I1120 22:21:34.005989 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key: {Name:mkadab013e678aa8731dfbd08d2713fa461786b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:21:34.006208 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:21:34.006259 1017300 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:21:34.006271 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:21:34.006297 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:21:34.006320 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:21:34.006360 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:21:34.006403 1017300 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:21:34.007200 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:21:34.030836 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:21:34.053054 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:21:34.073202 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:21:34.093203 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 22:21:34.111388 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:21:34.129855 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:21:34.147875 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:21:34.166304 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:21:34.185218 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:21:34.211053 1017300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:21:34.233808 1017300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:21:34.250326 1017300 ssh_runner.go:195] Run: openssl version
	I1120 22:21:34.257596 1017300 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:21:34.271080 1017300 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:21:34.279197 1017300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:21:34.283098 1017300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:21:34.283258 1017300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:21:34.324981 1017300 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:21:34.332606 1017300 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
	I1120 22:21:34.340305 1017300 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:21:34.348115 1017300 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:21:34.356702 1017300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:21:34.361005 1017300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:21:34.361113 1017300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:21:34.407134 1017300 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:21:34.414707 1017300 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 22:21:34.422657 1017300 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:21:34.430344 1017300 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:21:34.437709 1017300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:21:34.441425 1017300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:21:34.441544 1017300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:21:34.483653 1017300 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:21:34.491353 1017300 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
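
The openssl/ln sequence above is the standard CA-trust dance: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and symlinked into /etc/ssl/certs as <hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A sketch of that loop:

	// Sketch of the CA-trust step: compute the openssl subject hash for each cert
	// and symlink it into /etc/ssl/certs as <hash>.0 (run on the node).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func trust(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // mimic `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		for _, p := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/836852.pem",
			"/usr/share/ca-certificates/8368522.pem",
		} {
			if err := trust(p); err != nil {
				fmt.Println(p, err)
			}
		}
	}
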
	I1120 22:21:34.498749 1017300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:21:34.502570 1017300 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:21:34.502678 1017300 kubeadm.go:401] StartCluster: {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:21:34.502773 1017300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:21:34.502834 1017300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:21:34.530354 1017300 cri.go:89] found id: ""
	I1120 22:21:34.530436 1017300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:21:34.538477 1017300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:21:34.546748 1017300 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:21:34.546874 1017300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:21:34.555049 1017300 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:21:34.555084 1017300 kubeadm.go:158] found existing configuration files:
	
	I1120 22:21:34.555176 1017300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:21:34.563882 1017300 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:21:34.563973 1017300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:21:34.571713 1017300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:21:34.579681 1017300 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:21:34.579824 1017300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:21:34.587554 1017300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:21:34.595366 1017300 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:21:34.595476 1017300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:21:34.603093 1017300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:21:34.611077 1017300 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:21:34.611154 1017300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 22:21:34.620336 1017300 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:21:34.734145 1017300 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:21:34.829326 1017300 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 22:21:49.794355 1017300 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1120 22:21:49.794420 1017300 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:21:49.794511 1017300 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:21:49.794578 1017300 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:21:49.794627 1017300 kubeadm.go:319] OS: Linux
	I1120 22:21:49.794683 1017300 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:21:49.794739 1017300 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:21:49.794793 1017300 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:21:49.794847 1017300 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:21:49.794902 1017300 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:21:49.794956 1017300 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:21:49.795064 1017300 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:21:49.795120 1017300 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:21:49.795178 1017300 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:21:49.795257 1017300 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:21:49.795362 1017300 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:21:49.795462 1017300 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1120 22:21:49.795531 1017300 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:21:49.798700 1017300 out.go:252]   - Generating certificates and keys ...
	I1120 22:21:49.798804 1017300 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:21:49.798879 1017300 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:21:49.798955 1017300 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:21:49.799047 1017300 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:21:49.799117 1017300 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:21:49.799181 1017300 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:21:49.799243 1017300 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:21:49.799384 1017300 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-443192] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 22:21:49.799448 1017300 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:21:49.799581 1017300 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-443192] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 22:21:49.799656 1017300 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:21:49.799728 1017300 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:21:49.799778 1017300 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:21:49.799841 1017300 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:21:49.799898 1017300 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:21:49.799957 1017300 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:21:49.800037 1017300 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:21:49.800098 1017300 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:21:49.800188 1017300 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:21:49.800267 1017300 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 22:21:49.803367 1017300 out.go:252]   - Booting up control plane ...
	I1120 22:21:49.803528 1017300 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:21:49.803640 1017300 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:21:49.803724 1017300 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:21:49.803845 1017300 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:21:49.803949 1017300 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:21:49.803998 1017300 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:21:49.804179 1017300 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1120 22:21:49.804270 1017300 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.002639 seconds
	I1120 22:21:49.804393 1017300 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:21:49.804540 1017300 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:21:49.804610 1017300 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:21:49.804829 1017300 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-443192 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:21:49.804896 1017300 kubeadm.go:319] [bootstrap-token] Using token: hur2cd.mtt5gcqls6pj6237
	I1120 22:21:49.807728 1017300 out.go:252]   - Configuring RBAC rules ...
	I1120 22:21:49.807909 1017300 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:21:49.808063 1017300 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:21:49.808273 1017300 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:21:49.808477 1017300 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:21:49.808618 1017300 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:21:49.808739 1017300 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:21:49.808877 1017300 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:21:49.808931 1017300 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:21:49.808986 1017300 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:21:49.808995 1017300 kubeadm.go:319] 
	I1120 22:21:49.809063 1017300 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:21:49.809070 1017300 kubeadm.go:319] 
	I1120 22:21:49.809156 1017300 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:21:49.809165 1017300 kubeadm.go:319] 
	I1120 22:21:49.809193 1017300 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:21:49.809266 1017300 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:21:49.809326 1017300 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:21:49.809335 1017300 kubeadm.go:319] 
	I1120 22:21:49.809395 1017300 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:21:49.809404 1017300 kubeadm.go:319] 
	I1120 22:21:49.809457 1017300 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:21:49.809465 1017300 kubeadm.go:319] 
	I1120 22:21:49.809524 1017300 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:21:49.809612 1017300 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:21:49.809695 1017300 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:21:49.809704 1017300 kubeadm.go:319] 
	I1120 22:21:49.809799 1017300 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:21:49.809889 1017300 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:21:49.809899 1017300 kubeadm.go:319] 
	I1120 22:21:49.809993 1017300 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hur2cd.mtt5gcqls6pj6237 \
	I1120 22:21:49.810112 1017300 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:21:49.810139 1017300 kubeadm.go:319] 	--control-plane 
	I1120 22:21:49.810148 1017300 kubeadm.go:319] 
	I1120 22:21:49.810244 1017300 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:21:49.810253 1017300 kubeadm.go:319] 
	I1120 22:21:49.810345 1017300 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hur2cd.mtt5gcqls6pj6237 \
	I1120 22:21:49.810485 1017300 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:21:49.810498 1017300 cni.go:84] Creating CNI manager for ""
	I1120 22:21:49.810505 1017300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:21:49.813627 1017300 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:21:49.816541 1017300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:21:49.827015 1017300 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1120 22:21:49.827042 1017300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:21:49.851160 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:21:50.932082 1017300 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.080882615s)
	I1120 22:21:50.932124 1017300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:21:50.932254 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:50.932325 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-443192 minikube.k8s.io/updated_at=2025_11_20T22_21_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=old-k8s-version-443192 minikube.k8s.io/primary=true
	I1120 22:21:51.107151 1017300 ops.go:34] apiserver oom_adj: -16
	I1120 22:21:51.107298 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:51.607795 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:52.108222 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:52.607603 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:53.108033 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:53.608104 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:54.107389 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:54.608229 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:55.108364 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:55.608206 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:56.107423 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:56.608080 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:57.107404 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:57.607796 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:58.108204 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:58.607426 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:59.107411 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:21:59.607675 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:00.107393 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:00.607648 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:01.108345 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:01.607643 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:02.107388 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:02.607634 1017300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:22:02.771095 1017300 kubeadm.go:1114] duration metric: took 11.838882987s to wait for elevateKubeSystemPrivileges
	I1120 22:22:02.771122 1017300 kubeadm.go:403] duration metric: took 28.268447624s to StartCluster
	I1120 22:22:02.771151 1017300 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:02.771210 1017300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:02.772264 1017300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:02.772569 1017300 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:22:02.772650 1017300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:22:02.772887 1017300 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:02.772919 1017300 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:22:02.772997 1017300 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-443192"
	I1120 22:22:02.773018 1017300 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-443192"
	I1120 22:22:02.773011 1017300 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-443192"
	I1120 22:22:02.773039 1017300 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:02.773044 1017300 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-443192"
	I1120 22:22:02.773414 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:02.773808 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:02.775833 1017300 out.go:179] * Verifying Kubernetes components...
	I1120 22:22:02.778703 1017300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:02.820593 1017300 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-443192"
	I1120 22:22:02.820652 1017300 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:02.821194 1017300 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:02.826449 1017300 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:22:02.829998 1017300 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:02.830035 1017300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:22:02.830118 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:02.867251 1017300 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:02.867274 1017300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:22:02.867347 1017300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:02.868305 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:02.904743 1017300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:03.132587 1017300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:03.147114 1017300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:22:03.147404 1017300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:22:03.249251 1017300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:03.875326 1017300 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1120 22:22:03.877309 1017300 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:03.922158 1017300 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 22:22:03.925247 1017300 addons.go:515] duration metric: took 1.152305909s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 22:22:04.379786 1017300 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-443192" context rescaled to 1 replicas
	W1120 22:22:05.880642 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	W1120 22:22:07.880738 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	W1120 22:22:09.880884 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	W1120 22:22:11.881077 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	W1120 22:22:13.881227 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	W1120 22:22:16.380646 1017300 node_ready.go:57] node "old-k8s-version-443192" has "Ready":"False" status (will retry)
	I1120 22:22:18.380969 1017300 node_ready.go:49] node "old-k8s-version-443192" is "Ready"
	I1120 22:22:18.380994 1017300 node_ready.go:38] duration metric: took 14.503664984s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:18.381008 1017300 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:22:18.381073 1017300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:22:18.397732 1017300 api_server.go:72] duration metric: took 15.625129503s to wait for apiserver process to appear ...
	I1120 22:22:18.397755 1017300 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:22:18.397774 1017300 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:22:18.420188 1017300 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:22:18.421838 1017300 api_server.go:141] control plane version: v1.28.0
	I1120 22:22:18.421861 1017300 api_server.go:131] duration metric: took 24.099691ms to wait for apiserver health ...
	I1120 22:22:18.421871 1017300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:22:18.428998 1017300 system_pods.go:59] 8 kube-system pods found
	I1120 22:22:18.429085 1017300 system_pods.go:61] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:18.429109 1017300 system_pods.go:61] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running
	I1120 22:22:18.429158 1017300 system_pods.go:61] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:18.429184 1017300 system_pods.go:61] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running
	I1120 22:22:18.429207 1017300 system_pods.go:61] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running
	I1120 22:22:18.429243 1017300 system_pods.go:61] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:18.429267 1017300 system_pods.go:61] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running
	I1120 22:22:18.429289 1017300 system_pods.go:61] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:22:18.429329 1017300 system_pods.go:74] duration metric: took 7.451912ms to wait for pod list to return data ...
	I1120 22:22:18.429341 1017300 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:22:18.433974 1017300 default_sa.go:45] found service account: "default"
	I1120 22:22:18.434050 1017300 default_sa.go:55] duration metric: took 4.702527ms for default service account to be created ...
	I1120 22:22:18.434077 1017300 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:22:18.441024 1017300 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:18.441107 1017300 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:18.441128 1017300 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running
	I1120 22:22:18.441152 1017300 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:18.441190 1017300 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running
	I1120 22:22:18.441215 1017300 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running
	I1120 22:22:18.441235 1017300 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:18.441273 1017300 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running
	I1120 22:22:18.441300 1017300 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:22:18.441355 1017300 retry.go:31] will retry after 283.437592ms: missing components: kube-dns
	I1120 22:22:18.730088 1017300 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:18.730122 1017300 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:18.730129 1017300 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running
	I1120 22:22:18.730135 1017300 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:18.730140 1017300 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running
	I1120 22:22:18.730168 1017300 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running
	I1120 22:22:18.730178 1017300 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:18.730185 1017300 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running
	I1120 22:22:18.730192 1017300 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:22:18.730210 1017300 retry.go:31] will retry after 353.615992ms: missing components: kube-dns
	I1120 22:22:19.088790 1017300 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:19.088828 1017300 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:19.088836 1017300 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running
	I1120 22:22:19.088842 1017300 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:19.088846 1017300 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running
	I1120 22:22:19.088851 1017300 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running
	I1120 22:22:19.088855 1017300 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:19.088859 1017300 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running
	I1120 22:22:19.088865 1017300 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:22:19.088879 1017300 retry.go:31] will retry after 295.396878ms: missing components: kube-dns
	I1120 22:22:19.392107 1017300 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:19.392140 1017300 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Running
	I1120 22:22:19.392147 1017300 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running
	I1120 22:22:19.392152 1017300 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:19.392156 1017300 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running
	I1120 22:22:19.392161 1017300 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running
	I1120 22:22:19.392165 1017300 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:19.392169 1017300 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running
	I1120 22:22:19.392173 1017300 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Running
	I1120 22:22:19.392180 1017300 system_pods.go:126] duration metric: took 958.083893ms to wait for k8s-apps to be running ...
	I1120 22:22:19.392195 1017300 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:22:19.392255 1017300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:22:19.407870 1017300 system_svc.go:56] duration metric: took 15.666647ms WaitForService to wait for kubelet
	I1120 22:22:19.407899 1017300 kubeadm.go:587] duration metric: took 16.635300852s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:22:19.407917 1017300 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:22:19.410715 1017300 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:22:19.410744 1017300 node_conditions.go:123] node cpu capacity is 2
	I1120 22:22:19.410759 1017300 node_conditions.go:105] duration metric: took 2.835901ms to run NodePressure ...
	I1120 22:22:19.410772 1017300 start.go:242] waiting for startup goroutines ...
	I1120 22:22:19.410780 1017300 start.go:247] waiting for cluster config update ...
	I1120 22:22:19.410790 1017300 start.go:256] writing updated cluster config ...
	I1120 22:22:19.411141 1017300 ssh_runner.go:195] Run: rm -f paused
	I1120 22:22:19.415454 1017300 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:22:19.420688 1017300 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.426479 1017300 pod_ready.go:94] pod "coredns-5dd5756b68-q7jgh" is "Ready"
	I1120 22:22:19.426507 1017300 pod_ready.go:86] duration metric: took 5.791196ms for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.429870 1017300 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.435723 1017300 pod_ready.go:94] pod "etcd-old-k8s-version-443192" is "Ready"
	I1120 22:22:19.435751 1017300 pod_ready.go:86] duration metric: took 5.854869ms for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.439899 1017300 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.445521 1017300 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-443192" is "Ready"
	I1120 22:22:19.445549 1017300 pod_ready.go:86] duration metric: took 5.623458ms for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.449261 1017300 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:19.819687 1017300 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-443192" is "Ready"
	I1120 22:22:19.819713 1017300 pod_ready.go:86] duration metric: took 370.424851ms for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:20.021258 1017300 pod_ready.go:83] waiting for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:20.419842 1017300 pod_ready.go:94] pod "kube-proxy-srvjx" is "Ready"
	I1120 22:22:20.419871 1017300 pod_ready.go:86] duration metric: took 398.581743ms for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:20.620950 1017300 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:21.019882 1017300 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-443192" is "Ready"
	I1120 22:22:21.019909 1017300 pod_ready.go:86] duration metric: took 398.930235ms for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:22:21.019922 1017300 pod_ready.go:40] duration metric: took 1.604423575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:22:21.090846 1017300 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1120 22:22:21.094067 1017300 out.go:203] 
	W1120 22:22:21.096572 1017300 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 22:22:21.099553 1017300 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 22:22:21.103350 1017300 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-443192" cluster and "default" namespace by default
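
The checks logged above (apiserver process, healthz probe, default service account) can be re-run by hand against this profile. A minimal sketch, assuming the kubectl context "old-k8s-version-443192" that the run configured and that minikube ssh can still reach the node:

	# same apiserver process check the log performs with pgrep
	minikube -p old-k8s-version-443192 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# healthz probe through kubectl instead of curling https://192.168.85.2:8443 directly
	kubectl --context old-k8s-version-443192 get --raw='/healthz'
	# the default service account the elevateKubeSystemPrivileges wait loop polls for
	kubectl --context old-k8s-version-443192 -n default get sa default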
	
	
	==> CRI-O <==
	Nov 20 22:22:18 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:18.463301771Z" level=info msg="Created container d970a55f1dffc6a172bd8aace0a110bacfc34e4e680ac44795a3f4c7ee3ea0ff: kube-system/coredns-5dd5756b68-q7jgh/coredns" id=724a92a8-8e05-4c93-b359-9ea502903c17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:22:18 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:18.464654231Z" level=info msg="Starting container: d970a55f1dffc6a172bd8aace0a110bacfc34e4e680ac44795a3f4c7ee3ea0ff" id=dd96a928-16f7-421c-ad0b-6be0e8996bcd name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:22:18 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:18.469316659Z" level=info msg="Started container" PID=1952 containerID=d970a55f1dffc6a172bd8aace0a110bacfc34e4e680ac44795a3f4c7ee3ea0ff description=kube-system/coredns-5dd5756b68-q7jgh/coredns id=dd96a928-16f7-421c-ad0b-6be0e8996bcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=d08e3297b5589a72d25732510c1fabfd6a69ec36721c69a7587ca6f2ffdf616f
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.607158694Z" level=info msg="Running pod sandbox: default/busybox/POD" id=81202509-b847-42f8-8f90-b91359a4843b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.607234896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.612575503Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397 UID:930e84cf-8f5d-4107-bdf0-ee99b259637f NetNS:/var/run/netns/29269fea-3270-4622-bb86-137e292f0e14 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b158}] Aliases:map[]}"
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.612611515Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.622874089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397 UID:930e84cf-8f5d-4107-bdf0-ee99b259637f NetNS:/var/run/netns/29269fea-3270-4622-bb86-137e292f0e14 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b158}] Aliases:map[]}"
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.623305413Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.628762599Z" level=info msg="Ran pod sandbox 0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397 with infra container: default/busybox/POD" id=81202509-b847-42f8-8f90-b91359a4843b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.629838387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=02c7569f-2dd6-4961-8a82-d47f068fbf13 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.630040105Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=02c7569f-2dd6-4961-8a82-d47f068fbf13 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.630088171Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=02c7569f-2dd6-4961-8a82-d47f068fbf13 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.630594918Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cc9c6c3-b61c-4b98-9cbc-dba9b7d1c13c name=/runtime.v1.ImageService/PullImage
	Nov 20 22:22:21 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:21.633588467Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.705693612Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0cc9c6c3-b61c-4b98-9cbc-dba9b7d1c13c name=/runtime.v1.ImageService/PullImage
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.706965489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=148b30e6-2d45-427f-9680-d4a3f2ecae20 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.708515006Z" level=info msg="Creating container: default/busybox/busybox" id=c0f3b975-963c-4243-bd74-ba5990b0f77c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.708612246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.71377461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.714476904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.737334659Z" level=info msg="Created container 6de19c541c131aa10886151ac22b2741e2276adfd601becd142d5ad715fbec0e: default/busybox/busybox" id=c0f3b975-963c-4243-bd74-ba5990b0f77c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.738362554Z" level=info msg="Starting container: 6de19c541c131aa10886151ac22b2741e2276adfd601becd142d5ad715fbec0e" id=67383232-b10d-4cc0-9c76-889ba10533df name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:22:23 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:23.740191689Z" level=info msg="Started container" PID=2003 containerID=6de19c541c131aa10886151ac22b2741e2276adfd601becd142d5ad715fbec0e description=default/busybox/busybox id=67383232-b10d-4cc0-9c76-889ba10533df name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397
	Nov 20 22:22:29 old-k8s-version-443192 crio[837]: time="2025-11-20T22:22:29.487376195Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	6de19c541c131       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   0f59c18200c32       busybox                                          default
	d970a55f1dffc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   d08e3297b5589       coredns-5dd5756b68-q7jgh                         kube-system
	49a4bc72fb98c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   9c5e4a74e4b07       storage-provisioner                              kube-system
	a19ed57814857       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   f01e1cbd0aae3       kindnet-ch2km                                    kube-system
	291197e6a4aaf       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   013ea61a7292a       kube-proxy-srvjx                                 kube-system
	a0000097eaae7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   8cace623b6259       kube-controller-manager-old-k8s-version-443192   kube-system
	a9a1e1f8bedca       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   1abd9e0aee776       kube-scheduler-old-k8s-version-443192            kube-system
	e7aead0c61922       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   f831bad857b6d       etcd-old-k8s-version-443192                      kube-system
	72e7f8ead9a57       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   013bcc9615bef       kube-apiserver-old-k8s-version-443192            kube-system
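
The container table above is the CRI view of the node. A minimal sketch of regenerating it with crictl from inside the node, assuming the same profile name (the StartCluster step earlier lists containers the same way):

	# all kube-system containers, matching the crictl invocation logged during StartCluster
	minikube -p old-k8s-version-443192 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# just the busybox workload that the CRI-O log shows being pulled and started
	minikube -p old-k8s-version-443192 ssh -- sudo crictl ps --name busybox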
	
	
	==> coredns [d970a55f1dffc6a172bd8aace0a110bacfc34e4e680ac44795a3f4c7ee3ea0ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50808 - 46317 "HINFO IN 6889208880280945874.1418236870632652047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034349258s
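
The CoreDNS output above shows only its startup self-check. To confirm the host.minikube.internal record that the run injected into the coredns ConfigMap, a minimal sketch using the same context (label and address taken from the log above):

	# the injected hosts block should reference the gateway address 192.168.85.1
	kubectl --context old-k8s-version-443192 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# recent CoreDNS pod logs, selected by the k8s-app=kube-dns label the run waits on
	kubectl --context old-k8s-version-443192 -n kube-system logs -l k8s-app=kube-dns --tail=20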
	
	
	==> describe nodes <==
	Name:               old-k8s-version-443192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-443192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-443192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-443192
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:22:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:22:21 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:22:21 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:22:21 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:22:21 +0000   Thu, 20 Nov 2025 22:22:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-443192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                25366f85-c45a-4699-899a-6aa1d4483da7
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-q7jgh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-443192                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-ch2km                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-443192             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-443192    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-srvjx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-443192             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-443192 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-443192 event: Registered Node old-k8s-version-443192 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-443192 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 21:52] overlayfs: idmapped layers are currently not supported
	[Nov20 21:54] overlayfs: idmapped layers are currently not supported
	[Nov20 21:59] overlayfs: idmapped layers are currently not supported
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e7aead0c6192216788a0d7ac311aa5b5adc51e42e62949fe517e244706e970f5] <==
	{"level":"info","ts":"2025-11-20T22:21:43.291396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-20T22:21:43.291539Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-20T22:21:43.295313Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:21:43.295408Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:21:43.299169Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T22:21:43.303229Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T22:21:43.303376Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T22:21:44.072277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-20T22:21:44.072394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-20T22:21:44.072447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-20T22:21:44.072486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-20T22:21:44.07252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T22:21:44.07256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-20T22:21:44.072595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T22:21:44.075137Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:21:44.083184Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-443192 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T22:21:44.083363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:21:44.084391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T22:21:44.084525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:21:44.085423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T22:21:44.105662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T22:21:44.105765Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T22:21:44.11787Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:21:44.118039Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:21:44.118866Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:22:31 up  5:04,  0 user,  load average: 2.76, 3.43, 2.55
	Linux old-k8s-version-443192 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a19ed5781485708b8e23c3537cf64cd05767c953216e9df484b4ac821411974e] <==
	I1120 22:22:07.217494       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:22:07.303222       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:22:07.303447       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:22:07.303467       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:22:07.303482       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:22:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:22:07.504563       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:22:07.504916       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:22:07.504994       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:22:07.506421       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 22:22:07.705449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:22:07.705537       1 metrics.go:72] Registering metrics
	I1120 22:22:07.705614       1 controller.go:711] "Syncing nftables rules"
	I1120 22:22:17.508334       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:22:17.508451       1 main.go:301] handling current node
	I1120 22:22:27.504256       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:22:27.504290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [72e7f8ead9a57c454722d8ac30edeb2fe934dd504bd93f1f1d14d8cb2c07c4b8] <==
	I1120 22:21:46.659704       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 22:21:46.660671       1 aggregator.go:166] initial CRD sync complete...
	I1120 22:21:46.661211       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 22:21:46.661263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:21:46.661312       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:21:46.672816       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 22:21:46.679614       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1120 22:21:46.679642       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1120 22:21:46.688327       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1120 22:21:46.891571       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:21:47.364235       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 22:21:47.368983       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 22:21:47.369007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:21:48.060384       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:21:48.113547       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:21:48.239796       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 22:21:48.248413       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 22:21:48.249747       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 22:21:48.257081       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:21:48.614022       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 22:21:49.698354       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 22:21:49.711248       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 22:21:49.724976       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 22:22:01.961501       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1120 22:22:02.620600       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a0000097eaae7954f350ef71758fe6b3eb56ed279ff5d840c635be165d1b3a5b] <==
	I1120 22:22:01.812029       1 shared_informer.go:318] Caches are synced for attach detach
	I1120 22:22:01.812514       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1120 22:22:01.824027       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:22:01.914900       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:22:01.968906       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 22:22:02.221861       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:22:02.259432       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:22:02.259466       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 22:22:02.641862       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-srvjx"
	I1120 22:22:02.651099       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ch2km"
	I1120 22:22:02.759973       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q7jgh"
	I1120 22:22:02.829677       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-phrf6"
	I1120 22:22:02.916838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="948.250209ms"
	I1120 22:22:02.953353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.364801ms"
	I1120 22:22:02.953463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.807µs"
	I1120 22:22:03.933483       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 22:22:03.962289       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-phrf6"
	I1120 22:22:03.975937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.734393ms"
	I1120 22:22:04.003273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.175045ms"
	I1120 22:22:04.003513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.447µs"
	I1120 22:22:18.043264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.863µs"
	I1120 22:22:18.092583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.448µs"
	I1120 22:22:19.215736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.670093ms"
	I1120 22:22:19.216749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.615µs"
	I1120 22:22:21.759982       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [291197e6a4aafa9c8f9f5860265c19f79f47b45160b75e0792e16095ab950465] <==
	I1120 22:22:04.561400       1 server_others.go:69] "Using iptables proxy"
	I1120 22:22:04.576504       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 22:22:04.598948       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:22:04.601189       1 server_others.go:152] "Using iptables Proxier"
	I1120 22:22:04.601227       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 22:22:04.601234       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 22:22:04.601269       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 22:22:04.601475       1 server.go:846] "Version info" version="v1.28.0"
	I1120 22:22:04.601489       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:22:04.602543       1 config.go:188] "Starting service config controller"
	I1120 22:22:04.602579       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 22:22:04.602601       1 config.go:97] "Starting endpoint slice config controller"
	I1120 22:22:04.602605       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 22:22:04.605721       1 config.go:315] "Starting node config controller"
	I1120 22:22:04.605805       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 22:22:04.702716       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 22:22:04.702788       1 shared_informer.go:318] Caches are synced for service config
	I1120 22:22:04.706067       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a9a1e1f8bedcaaf1a391149974d84677fb79447d5bb42d76384d395c0ed86538] <==
	E1120 22:21:46.635063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 22:21:46.635030       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1120 22:21:46.634944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 22:21:46.635102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 22:21:46.635318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 22:21:46.635368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 22:21:46.635460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 22:21:46.635509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 22:21:47.475545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1120 22:21:47.475584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 22:21:47.489699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 22:21:47.489796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 22:21:47.557094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 22:21:47.557131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 22:21:47.574010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1120 22:21:47.574046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1120 22:21:47.670856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 22:21:47.670900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1120 22:21:47.712850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 22:21:47.712966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 22:21:47.748391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 22:21:47.748500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 22:21:47.784291       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1120 22:21:47.784328       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1120 22:21:49.722589       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 22:22:02 old-k8s-version-443192 kubelet[1380]: I1120 22:22:02.719007    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960a21f2-f0bc-4d3e-a058-91b7d45a0d7b-xtables-lock\") pod \"kindnet-ch2km\" (UID: \"960a21f2-f0bc-4d3e-a058-91b7d45a0d7b\") " pod="kube-system/kindnet-ch2km"
	Nov 20 22:22:02 old-k8s-version-443192 kubelet[1380]: I1120 22:22:02.719035    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-xtables-lock\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:02 old-k8s-version-443192 kubelet[1380]: I1120 22:22:02.719059    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-lib-modules\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:02 old-k8s-version-443192 kubelet[1380]: I1120 22:22:02.719082    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnfc\" (UniqueName: \"kubernetes.io/projected/960a21f2-f0bc-4d3e-a058-91b7d45a0d7b-kube-api-access-2bnfc\") pod \"kindnet-ch2km\" (UID: \"960a21f2-f0bc-4d3e-a058-91b7d45a0d7b\") " pod="kube-system/kindnet-ch2km"
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.922770    1380 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.922819    1380 projected.go:198] Error preparing data for projected volume kube-api-access-5lp4d for pod kube-system/kube-proxy-srvjx: failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.922909    1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-kube-api-access-5lp4d podName:46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0 nodeName:}" failed. No retries permitted until 2025-11-20 22:22:04.422884354 +0000 UTC m=+14.766356528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5lp4d" (UniqueName: "kubernetes.io/projected/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-kube-api-access-5lp4d") pod "kube-proxy-srvjx" (UID: "46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0") : failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.935531    1380 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.935728    1380 projected.go:198] Error preparing data for projected volume kube-api-access-2bnfc for pod kube-system/kindnet-ch2km: failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:03 old-k8s-version-443192 kubelet[1380]: E1120 22:22:03.935867    1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/960a21f2-f0bc-4d3e-a058-91b7d45a0d7b-kube-api-access-2bnfc podName:960a21f2-f0bc-4d3e-a058-91b7d45a0d7b nodeName:}" failed. No retries permitted until 2025-11-20 22:22:04.435845202 +0000 UTC m=+14.779317376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2bnfc" (UniqueName: "kubernetes.io/projected/960a21f2-f0bc-4d3e-a058-91b7d45a0d7b-kube-api-access-2bnfc") pod "kindnet-ch2km" (UID: "960a21f2-f0bc-4d3e-a058-91b7d45a0d7b") : failed to sync configmap cache: timed out waiting for the condition
	Nov 20 22:22:08 old-k8s-version-443192 kubelet[1380]: I1120 22:22:08.164386    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-srvjx" podStartSLOduration=6.164328144 podCreationTimestamp="2025-11-20 22:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:22:05.153948382 +0000 UTC m=+15.497420555" watchObservedRunningTime="2025-11-20 22:22:08.164328144 +0000 UTC m=+18.507800318"
	Nov 20 22:22:09 old-k8s-version-443192 kubelet[1380]: I1120 22:22:09.945672    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-ch2km" podStartSLOduration=5.5937740510000005 podCreationTimestamp="2025-11-20 22:22:02 +0000 UTC" firstStartedPulling="2025-11-20 22:22:04.806849599 +0000 UTC m=+15.150321773" lastFinishedPulling="2025-11-20 22:22:07.158696241 +0000 UTC m=+17.502168415" observedRunningTime="2025-11-20 22:22:08.164635225 +0000 UTC m=+18.508107399" watchObservedRunningTime="2025-11-20 22:22:09.945620693 +0000 UTC m=+20.289092867"
	Nov 20 22:22:17 old-k8s-version-443192 kubelet[1380]: I1120 22:22:17.962025    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.037257    1380 topology_manager.go:215] "Topology Admit Handler" podUID="b00478d4-df59-4e3b-9e06-d6dc59c4430f" podNamespace="kube-system" podName="coredns-5dd5756b68-q7jgh"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.058855    1380 topology_manager.go:215] "Topology Admit Handler" podUID="8f6e35f9-c59f-4a38-b658-c7acf5d0df1b" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.124243    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pg9\" (UniqueName: \"kubernetes.io/projected/8f6e35f9-c59f-4a38-b658-c7acf5d0df1b-kube-api-access-f7pg9\") pod \"storage-provisioner\" (UID: \"8f6e35f9-c59f-4a38-b658-c7acf5d0df1b\") " pod="kube-system/storage-provisioner"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.124304    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00478d4-df59-4e3b-9e06-d6dc59c4430f-config-volume\") pod \"coredns-5dd5756b68-q7jgh\" (UID: \"b00478d4-df59-4e3b-9e06-d6dc59c4430f\") " pod="kube-system/coredns-5dd5756b68-q7jgh"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.124329    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfnb\" (UniqueName: \"kubernetes.io/projected/b00478d4-df59-4e3b-9e06-d6dc59c4430f-kube-api-access-2gfnb\") pod \"coredns-5dd5756b68-q7jgh\" (UID: \"b00478d4-df59-4e3b-9e06-d6dc59c4430f\") " pod="kube-system/coredns-5dd5756b68-q7jgh"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: I1120 22:22:18.124358    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8f6e35f9-c59f-4a38-b658-c7acf5d0df1b-tmp\") pod \"storage-provisioner\" (UID: \"8f6e35f9-c59f-4a38-b658-c7acf5d0df1b\") " pod="kube-system/storage-provisioner"
	Nov 20 22:22:18 old-k8s-version-443192 kubelet[1380]: W1120 22:22:18.369357    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-9c5e4a74e4b07212975c4d565450f3a880d89db11f04bd7a13bab9c8c3af3f4b WatchSource:0}: Error finding container 9c5e4a74e4b07212975c4d565450f3a880d89db11f04bd7a13bab9c8c3af3f4b: Status 404 returned error can't find the container with id 9c5e4a74e4b07212975c4d565450f3a880d89db11f04bd7a13bab9c8c3af3f4b
	Nov 20 22:22:19 old-k8s-version-443192 kubelet[1380]: I1120 22:22:19.197876    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.197831021 podCreationTimestamp="2025-11-20 22:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:22:19.183929156 +0000 UTC m=+29.527401330" watchObservedRunningTime="2025-11-20 22:22:19.197831021 +0000 UTC m=+29.541303203"
	Nov 20 22:22:21 old-k8s-version-443192 kubelet[1380]: I1120 22:22:21.304635    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q7jgh" podStartSLOduration=19.304582373 podCreationTimestamp="2025-11-20 22:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:22:19.198698282 +0000 UTC m=+29.542170455" watchObservedRunningTime="2025-11-20 22:22:21.304582373 +0000 UTC m=+31.648054621"
	Nov 20 22:22:21 old-k8s-version-443192 kubelet[1380]: I1120 22:22:21.305206    1380 topology_manager.go:215] "Topology Admit Handler" podUID="930e84cf-8f5d-4107-bdf0-ee99b259637f" podNamespace="default" podName="busybox"
	Nov 20 22:22:21 old-k8s-version-443192 kubelet[1380]: I1120 22:22:21.442381    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpskm\" (UniqueName: \"kubernetes.io/projected/930e84cf-8f5d-4107-bdf0-ee99b259637f-kube-api-access-wpskm\") pod \"busybox\" (UID: \"930e84cf-8f5d-4107-bdf0-ee99b259637f\") " pod="default/busybox"
	Nov 20 22:22:21 old-k8s-version-443192 kubelet[1380]: W1120 22:22:21.625405    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397 WatchSource:0}: Error finding container 0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397: Status 404 returned error can't find the container with id 0f59c18200c3234c5a0ab75214e725805584df9885332c49ce6657a5b907e397
	
	
	==> storage-provisioner [49a4bc72fb98cf5a05c254d3ecba3f3c38aba9186f5d8cd7d69310360d0e92e4] <==
	I1120 22:22:18.452199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:22:18.482204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:22:18.482269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 22:22:18.496881       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:22:18.497063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_833a762a-a594-43eb-b576-f7ad4c6cf0fa!
	I1120 22:22:18.498062       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc0d96b6-2eea-47ae-a652-17e46e27b3bc", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-443192_833a762a-a594-43eb-b576-f7ad4c6cf0fa became leader
	I1120 22:22:18.597529       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_833a762a-a594-43eb-b576-f7ad4c6cf0fa!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-443192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-443192 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-443192 --alsologtostderr -v=1: exit status 80 (2.015331299s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-443192 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:23:43.189731 1023104 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:23:43.189941 1023104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:23:43.189985 1023104 out.go:374] Setting ErrFile to fd 2...
	I1120 22:23:43.190005 1023104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:23:43.190360 1023104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:23:43.190648 1023104 out.go:368] Setting JSON to false
	I1120 22:23:43.190703 1023104 mustload.go:66] Loading cluster: old-k8s-version-443192
	I1120 22:23:43.191177 1023104 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:23:43.191710 1023104 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:23:43.210261 1023104 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:23:43.210590 1023104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:23:43.277100 1023104 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:23:43.266741744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:23:43.277835 1023104 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-443192 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:23:43.281558 1023104 out.go:179] * Pausing node old-k8s-version-443192 ... 
	I1120 22:23:43.285211 1023104 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:23:43.285587 1023104 ssh_runner.go:195] Run: systemctl --version
	I1120 22:23:43.285638 1023104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:23:43.303269 1023104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:23:43.407636 1023104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:23:43.421392 1023104 pause.go:52] kubelet running: true
	I1120 22:23:43.421475 1023104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:23:43.686743 1023104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:23:43.686845 1023104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:23:43.763111 1023104 cri.go:89] found id: "a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36"
	I1120 22:23:43.763141 1023104 cri.go:89] found id: "68468b5e3bffbe45e05a07c014a98788897a7948c744fc6aa4b3b47a96e34963"
	I1120 22:23:43.763147 1023104 cri.go:89] found id: "babee24b5f037d1430cee3e96ac245ea580fe5c334e85189c98eda6e2c23ee2f"
	I1120 22:23:43.763151 1023104 cri.go:89] found id: "1576991fff11eb3845a8a4cb002efe82a207403e30e19f8f6299ed0c313b4ac8"
	I1120 22:23:43.763155 1023104 cri.go:89] found id: "9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e"
	I1120 22:23:43.763161 1023104 cri.go:89] found id: "08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d"
	I1120 22:23:43.763196 1023104 cri.go:89] found id: "d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4"
	I1120 22:23:43.763201 1023104 cri.go:89] found id: "9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9"
	I1120 22:23:43.763204 1023104 cri.go:89] found id: "0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502"
	I1120 22:23:43.763214 1023104 cri.go:89] found id: "9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	I1120 22:23:43.763222 1023104 cri.go:89] found id: "09cd542a1b6789791215f9991090113aa07db1c1dd6155ecc1a82452ba0a9b66"
	I1120 22:23:43.763225 1023104 cri.go:89] found id: ""
	I1120 22:23:43.763307 1023104 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:23:43.775035 1023104 retry.go:31] will retry after 213.489339ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:23:43Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:23:43.989557 1023104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:23:44.008191 1023104 pause.go:52] kubelet running: false
	I1120 22:23:44.008274 1023104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:23:44.205440 1023104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:23:44.205526 1023104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:23:44.279650 1023104 cri.go:89] found id: "a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36"
	I1120 22:23:44.279676 1023104 cri.go:89] found id: "68468b5e3bffbe45e05a07c014a98788897a7948c744fc6aa4b3b47a96e34963"
	I1120 22:23:44.279682 1023104 cri.go:89] found id: "babee24b5f037d1430cee3e96ac245ea580fe5c334e85189c98eda6e2c23ee2f"
	I1120 22:23:44.279686 1023104 cri.go:89] found id: "1576991fff11eb3845a8a4cb002efe82a207403e30e19f8f6299ed0c313b4ac8"
	I1120 22:23:44.279689 1023104 cri.go:89] found id: "9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e"
	I1120 22:23:44.279693 1023104 cri.go:89] found id: "08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d"
	I1120 22:23:44.279696 1023104 cri.go:89] found id: "d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4"
	I1120 22:23:44.279700 1023104 cri.go:89] found id: "9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9"
	I1120 22:23:44.279703 1023104 cri.go:89] found id: "0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502"
	I1120 22:23:44.279711 1023104 cri.go:89] found id: "9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	I1120 22:23:44.279714 1023104 cri.go:89] found id: "09cd542a1b6789791215f9991090113aa07db1c1dd6155ecc1a82452ba0a9b66"
	I1120 22:23:44.279717 1023104 cri.go:89] found id: ""
	I1120 22:23:44.279783 1023104 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:23:44.291306 1023104 retry.go:31] will retry after 441.890079ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:23:44Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:23:44.733587 1023104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:23:44.750006 1023104 pause.go:52] kubelet running: false
	I1120 22:23:44.750072 1023104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:23:44.945415 1023104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:23:44.945496 1023104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:23:45.087139 1023104 cri.go:89] found id: "a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36"
	I1120 22:23:45.087171 1023104 cri.go:89] found id: "68468b5e3bffbe45e05a07c014a98788897a7948c744fc6aa4b3b47a96e34963"
	I1120 22:23:45.087177 1023104 cri.go:89] found id: "babee24b5f037d1430cee3e96ac245ea580fe5c334e85189c98eda6e2c23ee2f"
	I1120 22:23:45.087181 1023104 cri.go:89] found id: "1576991fff11eb3845a8a4cb002efe82a207403e30e19f8f6299ed0c313b4ac8"
	I1120 22:23:45.087185 1023104 cri.go:89] found id: "9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e"
	I1120 22:23:45.087189 1023104 cri.go:89] found id: "08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d"
	I1120 22:23:45.087247 1023104 cri.go:89] found id: "d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4"
	I1120 22:23:45.087254 1023104 cri.go:89] found id: "9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9"
	I1120 22:23:45.087258 1023104 cri.go:89] found id: "0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502"
	I1120 22:23:45.087265 1023104 cri.go:89] found id: "9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	I1120 22:23:45.087269 1023104 cri.go:89] found id: "09cd542a1b6789791215f9991090113aa07db1c1dd6155ecc1a82452ba0a9b66"
	I1120 22:23:45.087272 1023104 cri.go:89] found id: ""
	I1120 22:23:45.087347 1023104 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:23:45.119104 1023104 out.go:203] 
	W1120 22:23:45.122543 1023104 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:23:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:23:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:23:45.122872 1023104 out.go:285] * 
	* 
	W1120 22:23:45.132716 1023104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:23:45.143064 1023104 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-443192 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-443192
helpers_test.go:243: (dbg) docker inspect old-k8s-version-443192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	        "Created": "2025-11-20T22:21:23.635114568Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:22:44.682781717Z",
	            "FinishedAt": "2025-11-20T22:22:43.859460237Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hostname",
	        "HostsPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hosts",
	        "LogPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753-json.log",
	        "Name": "/old-k8s-version-443192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-443192:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-443192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	                "LowerDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-443192",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-443192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-443192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2431d2fea12360a68455810c35eb44b387373c8b6c0b2224b02c1abd7057ffb7",
	            "SandboxKey": "/var/run/docker/netns/2431d2fea123",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-443192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:78:52:57:12:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be8765199279f8eee237afe7c8b9f46458c0018ce58bf28750fa9832048503b9",
	                    "EndpointID": "cb9a2eee9a93fbb4be060164629245e9b7812d0e1bd3544ee7e2867f0eb3254c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-443192",
	                        "947acc53b1a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192: exit status 2 (412.282485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
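The host container reports Running while `minikube status` exits with status 2, which the harness itself notes "may be ok". One way to cross-check the node state independently of minikube, assuming the container name shown in the docker inspect output above (a sketch, not part of the harness):

	# Confirm the kic container is running and not frozen at the Docker layer
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-443192
	# Then check the container runtime inside the node
	docker exec old-k8s-version-443192 systemctl status crio --no-pager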
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25: (1.398836068s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-640880 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo containerd config dump                                                                                                                                                                                                  │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo crio config                                                                                                                                                                                                             │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ delete  │ -p cilium-640880                                                                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:19 UTC │
	│ start   │ -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p kubernetes-upgrade-410652                                                                                                                                                                                                                  │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420078    │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p force-systemd-env-833370                                                                                                                                                                                                                   │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ cert-options-961311 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:22:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:22:44.394281 1020891 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:22:44.394475 1020891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:22:44.394505 1020891 out.go:374] Setting ErrFile to fd 2...
	I1120 22:22:44.394529 1020891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:22:44.394790 1020891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:22:44.395317 1020891 out.go:368] Setting JSON to false
	I1120 22:22:44.396309 1020891 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18290,"bootTime":1763659075,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:22:44.396413 1020891 start.go:143] virtualization:  
	I1120 22:22:44.399597 1020891 out.go:179] * [old-k8s-version-443192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:22:44.403368 1020891 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:22:44.403454 1020891 notify.go:221] Checking for updates...
	I1120 22:22:44.409213 1020891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:22:44.412128 1020891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:44.415088 1020891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:22:44.417906 1020891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:22:44.420684 1020891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:22:44.424258 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:44.427907 1020891 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 22:22:44.430789 1020891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:22:44.469086 1020891 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:22:44.469216 1020891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:22:44.527632 1020891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:22:44.517753918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:22:44.527750 1020891 docker.go:319] overlay module found
	I1120 22:22:44.531086 1020891 out.go:179] * Using the docker driver based on existing profile
	I1120 22:22:44.533983 1020891 start.go:309] selected driver: docker
	I1120 22:22:44.534004 1020891 start.go:930] validating driver "docker" against &{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:44.534103 1020891 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:22:44.534838 1020891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:22:44.590107 1020891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:22:44.58041212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:22:44.590440 1020891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:22:44.590477 1020891 cni.go:84] Creating CNI manager for ""
	I1120 22:22:44.590538 1020891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:22:44.590582 1020891 start.go:353] cluster config:
	{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:44.593891 1020891 out.go:179] * Starting "old-k8s-version-443192" primary control-plane node in "old-k8s-version-443192" cluster
	I1120 22:22:44.596732 1020891 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:22:44.599695 1020891 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:22:44.602610 1020891 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:22:44.602665 1020891 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1120 22:22:44.602678 1020891 cache.go:65] Caching tarball of preloaded images
	I1120 22:22:44.602680 1020891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:22:44.602769 1020891 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:22:44.602797 1020891 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1120 22:22:44.602914 1020891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:22:44.623496 1020891 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:22:44.623520 1020891 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:22:44.623534 1020891 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:22:44.623558 1020891 start.go:360] acquireMachinesLock for old-k8s-version-443192: {Name:mk170647942fc2bf46e44d6cf36b5ae812935bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:22:44.623618 1020891 start.go:364] duration metric: took 37.153µs to acquireMachinesLock for "old-k8s-version-443192"
	I1120 22:22:44.623643 1020891 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:22:44.623650 1020891 fix.go:54] fixHost starting: 
	I1120 22:22:44.624004 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:44.642352 1020891 fix.go:112] recreateIfNeeded on old-k8s-version-443192: state=Stopped err=<nil>
	W1120 22:22:44.642383 1020891 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:22:44.645586 1020891 out.go:252] * Restarting existing docker container for "old-k8s-version-443192" ...
	I1120 22:22:44.645674 1020891 cli_runner.go:164] Run: docker start old-k8s-version-443192
	I1120 22:22:44.930668 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:44.954024 1020891 kic.go:430] container "old-k8s-version-443192" state is running.
	I1120 22:22:44.954551 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:44.980097 1020891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:22:44.980937 1020891 machine.go:94] provisionDockerMachine start ...
	I1120 22:22:44.981025 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:45.004082 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:45.004434 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:45.004445 1020891 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:22:45.005269 1020891 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:22:48.154693 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:22:48.154718 1020891 ubuntu.go:182] provisioning hostname "old-k8s-version-443192"
	I1120 22:22:48.154789 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.173312 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.173740 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.173759 1020891 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-443192 && echo "old-k8s-version-443192" | sudo tee /etc/hostname
	I1120 22:22:48.324379 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:22:48.324484 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.343041 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.343359 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.343381 1020891 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-443192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-443192/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-443192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:22:48.487310 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:22:48.487348 1020891 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:22:48.487381 1020891 ubuntu.go:190] setting up certificates
	I1120 22:22:48.487392 1020891 provision.go:84] configureAuth start
	I1120 22:22:48.487454 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:48.505335 1020891 provision.go:143] copyHostCerts
	I1120 22:22:48.505411 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:22:48.505426 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:22:48.505503 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:22:48.505610 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:22:48.505621 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:22:48.505649 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:22:48.505716 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:22:48.505724 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:22:48.505751 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:22:48.505813 1020891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-443192 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-443192]
	I1120 22:22:48.614219 1020891 provision.go:177] copyRemoteCerts
	I1120 22:22:48.614292 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:22:48.614338 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.632020 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:48.735402 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:22:48.755534 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 22:22:48.775604 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:22:48.794602 1020891 provision.go:87] duration metric: took 307.185397ms to configureAuth
	I1120 22:22:48.794625 1020891 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:22:48.794814 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:48.794916 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.818932 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.819334 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.819403 1020891 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:22:49.184920 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:22:49.184953 1020891 machine.go:97] duration metric: took 4.204000561s to provisionDockerMachine
	I1120 22:22:49.184965 1020891 start.go:293] postStartSetup for "old-k8s-version-443192" (driver="docker")
	I1120 22:22:49.184975 1020891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:22:49.185035 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:22:49.185088 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.204436 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.307151 1020891 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:22:49.310381 1020891 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:22:49.310414 1020891 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:22:49.310426 1020891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:22:49.310481 1020891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:22:49.310568 1020891 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:22:49.310686 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:22:49.318229 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:22:49.336998 1020891 start.go:296] duration metric: took 152.016752ms for postStartSetup
	I1120 22:22:49.337120 1020891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:22:49.337207 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.355474 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.452855 1020891 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:22:49.457974 1020891 fix.go:56] duration metric: took 4.834316045s for fixHost
	I1120 22:22:49.458001 1020891 start.go:83] releasing machines lock for "old-k8s-version-443192", held for 4.834370371s
	I1120 22:22:49.458082 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:49.476110 1020891 ssh_runner.go:195] Run: cat /version.json
	I1120 22:22:49.476163 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.476163 1020891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:22:49.476225 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.495834 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.498712 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.598793 1020891 ssh_runner.go:195] Run: systemctl --version
	I1120 22:22:49.694222 1020891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:22:49.731899 1020891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:22:49.736318 1020891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:22:49.736466 1020891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:22:49.744936 1020891 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:22:49.744961 1020891 start.go:496] detecting cgroup driver to use...
	I1120 22:22:49.744993 1020891 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:22:49.745058 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:22:49.761323 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:22:49.775082 1020891 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:22:49.775146 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:22:49.790489 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:22:49.804844 1020891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:22:49.945243 1020891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:22:50.077046 1020891 docker.go:234] disabling docker service ...
	I1120 22:22:50.077197 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:22:50.095088 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:22:50.109764 1020891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:22:50.241604 1020891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:22:50.367805 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:22:50.382123 1020891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:22:50.397431 1020891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1120 22:22:50.397492 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.408084 1020891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:22:50.408152 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.417650 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.427111 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.436837 1020891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:22:50.445353 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.455897 1020891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.464827 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.474433 1020891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:22:50.484110 1020891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:22:50.493511 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:50.613212 1020891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:22:50.786089 1020891 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:22:50.786171 1020891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:22:50.789981 1020891 start.go:564] Will wait 60s for crictl version
	I1120 22:22:50.790045 1020891 ssh_runner.go:195] Run: which crictl
	I1120 22:22:50.793540 1020891 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:22:50.825692 1020891 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:22:50.825843 1020891 ssh_runner.go:195] Run: crio --version
	I1120 22:22:50.865186 1020891 ssh_runner.go:195] Run: crio --version
	I1120 22:22:50.899338 1020891 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1120 22:22:50.902230 1020891 cli_runner.go:164] Run: docker network inspect old-k8s-version-443192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:22:50.918852 1020891 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:22:50.922888 1020891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:22:50.933738 1020891 kubeadm.go:884] updating cluster {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:22:50.933862 1020891 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:22:50.933919 1020891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:22:50.969206 1020891 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:22:50.969234 1020891 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:22:50.969291 1020891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:22:50.999257 1020891 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:22:50.999281 1020891 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:22:50.999288 1020891 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1120 22:22:50.999389 1020891 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-443192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:22:50.999468 1020891 ssh_runner.go:195] Run: crio config
	I1120 22:22:51.055672 1020891 cni.go:84] Creating CNI manager for ""
	I1120 22:22:51.055697 1020891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:22:51.055715 1020891 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:22:51.055738 1020891 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-443192 NodeName:old-k8s-version-443192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:22:51.055885 1020891 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-443192"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
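The generated kubeadm.yaml above is one multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A small sketch that walks such a file and prints each document's apiVersion and kind, using the generic gopkg.in/yaml.v3 decoder rather than the kubeadm API types:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Usage: kinds /var/tmp/minikube/kubeadm.yaml.new
        if len(os.Args) < 2 {
            fmt.Println("usage: kinds <kubeadm.yaml>")
            return
        }
        f, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println("decode:", err)
                return
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }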
	
	I1120 22:22:51.055965 1020891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 22:22:51.064414 1020891 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:22:51.064575 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:22:51.072703 1020891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1120 22:22:51.087245 1020891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:22:51.101671 1020891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1120 22:22:51.116016 1020891 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:22:51.120279 1020891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:22:51.131776 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:51.255662 1020891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:22:51.271757 1020891 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192 for IP: 192.168.85.2
	I1120 22:22:51.271780 1020891 certs.go:195] generating shared ca certs ...
	I1120 22:22:51.271831 1020891 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:51.272006 1020891 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:22:51.272084 1020891 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:22:51.272098 1020891 certs.go:257] generating profile certs ...
	I1120 22:22:51.272233 1020891 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.key
	I1120 22:22:51.272329 1020891 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e
	I1120 22:22:51.272396 1020891 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key
	I1120 22:22:51.272542 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:22:51.272594 1020891 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:22:51.272609 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:22:51.272637 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:22:51.272690 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:22:51.272726 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:22:51.272824 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:22:51.273510 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:22:51.297325 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:22:51.317556 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:22:51.338774 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:22:51.360333 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 22:22:51.380972 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:22:51.403738 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:22:51.432212 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:22:51.460422 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:22:51.479406 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:22:51.498296 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:22:51.518273 1020891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:22:51.531969 1020891 ssh_runner.go:195] Run: openssl version
	I1120 22:22:51.538355 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.546627 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:22:51.555245 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.559566 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.559678 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.601914 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:22:51.610292 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.618046 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:22:51.629840 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.633800 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.633873 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.676429 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:22:51.683679 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.690876 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:22:51.698397 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.702244 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.702308 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.743160 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:22:51.750658 1020891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:22:51.754416 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:22:51.795375 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:22:51.837468 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:22:51.878566 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:22:51.942508 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:22:52.007936 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
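Each openssl x509 ... -checkend 86400 call above confirms that a control-plane certificate remains valid for at least another 24 hours before the existing cluster configuration is reused. The equivalent check in Go with crypto/x509, taking a PEM certificate path on the command line:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: checkend <cert.pem>")
            return
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        // Same semantics as "openssl x509 -checkend 86400": non-zero exit if the
        // certificate expires within the next 24 hours.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Printf("certificate expires %s (within 24h)\n", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h, expires", cert.NotAfter)
    }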
	I1120 22:22:52.066374 1020891 kubeadm.go:401] StartCluster: {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:52.066519 1020891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:22:52.066616 1020891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:22:52.149758 1020891 cri.go:89] found id: "08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d"
	I1120 22:22:52.149829 1020891 cri.go:89] found id: "d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4"
	I1120 22:22:52.149847 1020891 cri.go:89] found id: "9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9"
	I1120 22:22:52.149867 1020891 cri.go:89] found id: "0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502"
	I1120 22:22:52.149903 1020891 cri.go:89] found id: ""
	I1120 22:22:52.149974 1020891 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:22:52.170210 1020891 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:22:52Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:22:52.170362 1020891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:22:52.186140 1020891 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:22:52.186207 1020891 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:22:52.186297 1020891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:22:52.201800 1020891 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:22:52.202456 1020891 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-443192" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:52.202760 1020891 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-443192" cluster setting kubeconfig missing "old-k8s-version-443192" context setting]
	I1120 22:22:52.203352 1020891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
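The repair above re-adds the missing "old-k8s-version-443192" cluster and context entries to the shared kubeconfig before rewriting it under a file lock. A minimal sketch of that kind of kubeconfig surgery with client-go's clientcmd package; the server address mirrors the node IP and port shown in the log (the real kubeconfig may use a forwarded address), and the user/AuthInfo name is an assumption made only for illustration:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        const path = "/home/jenkins/minikube-integration/21923-834992/kubeconfig"
        const name = "old-k8s-version-443192"

        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        cfg.Clusters[name] = &clientcmdapi.Cluster{
            Server:               "https://192.168.85.2:8443",
            CertificateAuthority: "/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt",
        }
        cfg.Contexts[name] = &clientcmdapi.Context{
            Cluster:  name,
            AuthInfo: name, // assumed user entry name, not taken from the log
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            fmt.Println("write kubeconfig:", err)
        }
    }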
	I1120 22:22:52.204938 1020891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:22:52.218608 1020891 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:22:52.218700 1020891 kubeadm.go:602] duration metric: took 32.473009ms to restartPrimaryControlPlane
	I1120 22:22:52.218726 1020891 kubeadm.go:403] duration metric: took 152.363341ms to StartCluster
	I1120 22:22:52.218766 1020891 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:52.218847 1020891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:52.219956 1020891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:52.220238 1020891 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:22:52.220625 1020891 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:22:52.220708 1020891 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-443192"
	I1120 22:22:52.220718 1020891 addons.go:70] Setting dashboard=true in profile "old-k8s-version-443192"
	I1120 22:22:52.220734 1020891 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-443192"
	W1120 22:22:52.220741 1020891 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:22:52.220765 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.220796 1020891 addons.go:239] Setting addon dashboard=true in "old-k8s-version-443192"
	W1120 22:22:52.220807 1020891 addons.go:248] addon dashboard should already be in state true
	I1120 22:22:52.220834 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.221241 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.221297 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.224441 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:52.224535 1020891 out.go:179] * Verifying Kubernetes components...
	I1120 22:22:52.224783 1020891 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-443192"
	I1120 22:22:52.224823 1020891 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-443192"
	I1120 22:22:52.225162 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.230452 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:52.278473 1020891 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-443192"
	W1120 22:22:52.278495 1020891 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:22:52.278520 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.278937 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.282768 1020891 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:22:52.282884 1020891 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:22:52.289307 1020891 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:22:52.289409 1020891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:52.289419 1020891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:22:52.289481 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.293261 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:22:52.293286 1020891 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:22:52.293353 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.329110 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.335951 1020891 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:52.335972 1020891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:22:52.336040 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.362619 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.376121 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.594878 1020891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:22:52.623471 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:52.644224 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:52.645723 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:22:52.645744 1020891 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:22:52.652237 1020891 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:52.696979 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:22:52.697015 1020891 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:22:52.810223 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:22:52.810296 1020891 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:22:52.883878 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:22:52.883943 1020891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:22:52.932774 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:22:52.932863 1020891 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:22:52.962601 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:22:52.962684 1020891 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:22:52.987954 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:22:52.988027 1020891 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:22:53.020858 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:22:53.020933 1020891 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:22:53.045591 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:22:53.045665 1020891 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:22:53.068309 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:22:57.090712 1020891 node_ready.go:49] node "old-k8s-version-443192" is "Ready"
	I1120 22:22:57.090789 1020891 node_ready.go:38] duration metric: took 4.438507258s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:57.090818 1020891 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:22:57.090907 1020891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:22:58.759551 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.136041596s)
	I1120 22:22:58.759603 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.115356194s)
	I1120 22:22:59.310058 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.241655313s)
	I1120 22:22:59.310089 1020891 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.219121938s)
	I1120 22:22:59.310284 1020891 api_server.go:72] duration metric: took 7.089991015s to wait for apiserver process to appear ...
	I1120 22:22:59.310293 1020891 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:22:59.310314 1020891 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:22:59.313474 1020891 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-443192 addons enable metrics-server
	
	I1120 22:22:59.316001 1020891 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1120 22:22:59.319843 1020891 addons.go:515] duration metric: took 7.099197575s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1120 22:22:59.324682 1020891 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:22:59.326283 1020891 api_server.go:141] control plane version: v1.28.0
	I1120 22:22:59.326355 1020891 api_server.go:131] duration metric: took 16.040888ms to wait for apiserver health ...
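The health wait above polls the apiserver's /healthz endpoint until it answers 200/ok before the addon and pod checks continue. A stripped-down version of that probe; TLS verification is skipped here only to keep the sketch short, whereas a real check would normally trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // NOTE: InsecureSkipVerify is used only for brevity in this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
    }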
	I1120 22:22:59.326380 1020891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:22:59.331734 1020891 system_pods.go:59] 8 kube-system pods found
	I1120 22:22:59.331822 1020891 system_pods.go:61] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:59.331861 1020891 system_pods.go:61] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:22:59.331887 1020891 system_pods.go:61] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:59.331915 1020891 system_pods.go:61] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:22:59.331951 1020891 system_pods.go:61] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:22:59.331976 1020891 system_pods.go:61] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:59.332000 1020891 system_pods.go:61] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:22:59.332044 1020891 system_pods.go:61] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Running
	I1120 22:22:59.332066 1020891 system_pods.go:74] duration metric: took 5.66614ms to wait for pod list to return data ...
	I1120 22:22:59.332102 1020891 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:22:59.334932 1020891 default_sa.go:45] found service account: "default"
	I1120 22:22:59.335038 1020891 default_sa.go:55] duration metric: took 2.910659ms for default service account to be created ...
	I1120 22:22:59.335065 1020891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:22:59.338783 1020891 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:59.338818 1020891 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:59.338827 1020891 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:22:59.338834 1020891 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:59.338841 1020891 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:22:59.338847 1020891 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:22:59.338853 1020891 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:59.338861 1020891 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:22:59.338869 1020891 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Running
	I1120 22:22:59.338878 1020891 system_pods.go:126] duration metric: took 3.792583ms to wait for k8s-apps to be running ...
	I1120 22:22:59.338891 1020891 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:22:59.338951 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:22:59.352449 1020891 system_svc.go:56] duration metric: took 13.548245ms WaitForService to wait for kubelet
	I1120 22:22:59.352478 1020891 kubeadm.go:587] duration metric: took 7.132186282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:22:59.352498 1020891 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:22:59.355702 1020891 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:22:59.355738 1020891 node_conditions.go:123] node cpu capacity is 2
	I1120 22:22:59.355752 1020891 node_conditions.go:105] duration metric: took 3.248715ms to run NodePressure ...
	I1120 22:22:59.355769 1020891 start.go:242] waiting for startup goroutines ...
	I1120 22:22:59.355780 1020891 start.go:247] waiting for cluster config update ...
	I1120 22:22:59.355791 1020891 start.go:256] writing updated cluster config ...
	I1120 22:22:59.356089 1020891 ssh_runner.go:195] Run: rm -f paused
	I1120 22:22:59.359745 1020891 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:22:59.364009 1020891 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:23:01.369854 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:03.370186 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:05.370616 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:07.870311 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:09.871231 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:12.371111 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:14.870465 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:16.870794 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:18.871449 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:21.370931 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:23.869596 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:25.870422 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:28.371679 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	I1120 22:23:29.870375 1020891 pod_ready.go:94] pod "coredns-5dd5756b68-q7jgh" is "Ready"
	I1120 22:23:29.870407 1020891 pod_ready.go:86] duration metric: took 30.506370796s for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.873549 1020891 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.878857 1020891 pod_ready.go:94] pod "etcd-old-k8s-version-443192" is "Ready"
	I1120 22:23:29.878888 1020891 pod_ready.go:86] duration metric: took 5.310535ms for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.882038 1020891 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.887289 1020891 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-443192" is "Ready"
	I1120 22:23:29.887317 1020891 pod_ready.go:86] duration metric: took 5.251596ms for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.890500 1020891 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.077385 1020891 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-443192" is "Ready"
	I1120 22:23:30.077420 1020891 pod_ready.go:86] duration metric: took 186.892047ms for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.269696 1020891 pod_ready.go:83] waiting for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.668239 1020891 pod_ready.go:94] pod "kube-proxy-srvjx" is "Ready"
	I1120 22:23:30.668268 1020891 pod_ready.go:86] duration metric: took 398.54114ms for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.869073 1020891 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:31.268847 1020891 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-443192" is "Ready"
	I1120 22:23:31.268882 1020891 pod_ready.go:86] duration metric: took 399.781016ms for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:31.268895 1020891 pod_ready.go:40] duration metric: took 31.909119901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
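The pod_ready loop above watches each labelled kube-system pod until its Ready condition turns True (coredns took about 30 seconds here). A condensed client-go sketch of the same per-pod check, reusing the kubeconfig path and pod name that appear in the log; a real wait would wrap this in a polling loop with a timeout:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-834992/kubeconfig")
        if err != nil {
            fmt.Println("kubeconfig:", err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("client:", err)
            return
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "coredns-5dd5756b68-q7jgh", metav1.GetOptions{})
        if err != nil {
            fmt.Println("get pod:", err)
            return
        }
        // A pod counts as "Ready" when its PodReady condition has status True.
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
            }
        }
    }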
	I1120 22:23:31.324704 1020891 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1120 22:23:31.327899 1020891 out.go:203] 
	W1120 22:23:31.330744 1020891 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 22:23:31.333608 1020891 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 22:23:31.336541 1020891 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-443192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 22:23:21 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:21.678343595Z" level=info msg="Removed container 72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs/dashboard-metrics-scraper" id=69e3917e-6d1d-4262-bda9-45e30cc16b97 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:23:28 old-k8s-version-443192 conmon[1145]: conmon 9985fcead7c1c65a99bb <ninfo>: container 1155 exited with status 1
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.669981681Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2133167-f8c0-4bf4-8a25-3f35542f4d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.671757843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2541dfa-1681-4263-9772-f3c8e044386d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.672626071Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b936a84e-bd34-4316-b421-30b7fb3fa0c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.672769942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.677936146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678209938Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5d473324a82226ce26c6c79378e70d03463ec005f439f70eb712349054c3724e/merged/etc/passwd: no such file or directory"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678235588Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5d473324a82226ce26c6c79378e70d03463ec005f439f70eb712349054c3724e/merged/etc/group: no such file or directory"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678473343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.704473727Z" level=info msg="Created container a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36: kube-system/storage-provisioner/storage-provisioner" id=b936a84e-bd34-4316-b421-30b7fb3fa0c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.705785538Z" level=info msg="Starting container: a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36" id=4ab45e5c-34a6-4982-a1da-a199ef849db7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.708212072Z" level=info msg="Started container" PID=1633 containerID=a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36 description=kube-system/storage-provisioner/storage-provisioner id=4ab45e5c-34a6-4982-a1da-a199ef849db7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffd0babd15674634c8caa2e125565a77ff2b5f6393b27217e4f983ae5a7be78a
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.418151848Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.42452985Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.424566642Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.424592267Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.427928343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.4279615Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.427977771Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431591972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431628469Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431653339Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.434887932Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.434927908Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a188c4e4fdda0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   ffd0babd15674       storage-provisioner                              kube-system
	9741b34fa9e85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   1                   1ac7a300950d4       dashboard-metrics-scraper-5f989dc9cf-pppjs       kubernetes-dashboard
	09cd542a1b678       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   1b57c879c91c6       kubernetes-dashboard-8694d4445c-pvh8p            kubernetes-dashboard
	68468b5e3bffb       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   ec2dadf3a1066       coredns-5dd5756b68-q7jgh                         kube-system
	babee24b5f037       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   3c3780f139d5f       kindnet-ch2km                                    kube-system
	0177ef5bb9c7f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   c1edbfba94e55       busybox                                          default
	1576991fff11e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   dbb62011da372       kube-proxy-srvjx                                 kube-system
	9985fcead7c1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   ffd0babd15674       storage-provisioner                              kube-system
	08baca7143715       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   27ca5159eb488       kube-apiserver-old-k8s-version-443192            kube-system
	d30d232b1913b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   081357e4bc2fd       kube-controller-manager-old-k8s-version-443192   kube-system
	9dcca088872de       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   69a3573648b0b       etcd-old-k8s-version-443192                      kube-system
	0eb106aae6e3d       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   b17e087980e9d       kube-scheduler-old-k8s-version-443192            kube-system
	
	
	==> coredns [68468b5e3bffbe45e05a07c014a98788897a7948c744fc6aa4b3b47a96e34963] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59211 - 64146 "HINFO IN 2822090401044257068.8793553617202448944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024308547s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-443192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-443192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-443192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-443192
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:23:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:22:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-443192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                25366f85-c45a-4699-899a-6aa1d4483da7
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-q7jgh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-443192                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-ch2km                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-443192             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-443192    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-srvjx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-443192             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pppjs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pvh8p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-443192 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-443192 event: Registered Node old-k8s-version-443192 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-443192 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-443192 event: Registered Node old-k8s-version-443192 in Controller
	
	
	==> dmesg <==
	[Nov20 21:54] overlayfs: idmapped layers are currently not supported
	[Nov20 21:59] overlayfs: idmapped layers are currently not supported
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9] <==
	{"level":"info","ts":"2025-11-20T22:22:52.297096Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T22:22:52.297149Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T22:22:52.297484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-20T22:22:52.297603Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-20T22:22:52.297969Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:22:52.298077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:22:52.373946Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T22:22:52.374062Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:22:52.374072Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:22:52.387228Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T22:22:52.387293Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T22:22:54.11643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.11666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.116704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.116737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.119564Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-443192 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T22:22:54.119658Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:22:54.121221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T22:22:54.121492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:22:54.123549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T22:22:54.148844Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T22:22:54.148955Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:23:46 up  5:05,  0 user,  load average: 1.81, 3.00, 2.47
	Linux old-k8s-version-443192 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [babee24b5f037d1430cee3e96ac245ea580fe5c334e85189c98eda6e2c23ee2f] <==
	I1120 22:22:58.219965       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:22:58.220183       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:22:58.220317       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:22:58.220329       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:22:58.220341       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:22:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:22:58.411994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:22:58.412014       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:22:58.412022       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:22:58.412303       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:23:28.411623       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:23:28.412569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:23:28.412636       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:23:28.414918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:23:29.712919       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:23:29.712949       1 metrics.go:72] Registering metrics
	I1120 22:23:29.713019       1 controller.go:711] "Syncing nftables rules"
	I1120 22:23:38.417800       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:23:38.417858       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d] <==
	I1120 22:22:57.069461       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 22:22:57.071899       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 22:22:57.078353       1 aggregator.go:166] initial CRD sync complete...
	I1120 22:22:57.078474       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 22:22:57.078604       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:22:57.078659       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:22:57.088705       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 22:22:57.091943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 22:22:57.092867       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1120 22:22:57.092887       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1120 22:22:57.092974       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:22:57.116549       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:22:57.147433       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1120 22:22:57.246616       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:22:57.932120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:22:59.084792       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 22:22:59.128993       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 22:22:59.163597       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:22:59.177606       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:22:59.190854       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 22:22:59.258018       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.97.186"}
	I1120 22:22:59.301984       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.195.65"}
	I1120 22:23:10.116583       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:23:10.130871       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1120 22:23:10.213252       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4] <==
	I1120 22:23:10.211746       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:23:10.219241       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1120 22:23:10.220994       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-pppjs"
	I1120 22:23:10.221104       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pvh8p"
	I1120 22:23:10.230769       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:23:10.243702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.097794ms"
	I1120 22:23:10.244031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.166356ms"
	I1120 22:23:10.252749       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1120 22:23:10.255738       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1120 22:23:10.263320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.918661ms"
	I1120 22:23:10.263476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.606µs"
	I1120 22:23:10.263553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.455712ms"
	I1120 22:23:10.263960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.84µs"
	I1120 22:23:10.272213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.441µs"
	I1120 22:23:10.286715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.373µs"
	I1120 22:23:10.555105       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:23:10.555134       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 22:23:10.571274       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:23:15.649652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.914879ms"
	I1120 22:23:15.650234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.371µs"
	I1120 22:23:20.664681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.375µs"
	I1120 22:23:21.670547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.507µs"
	I1120 22:23:22.670055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.857µs"
	I1120 22:23:29.553569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.917005ms"
	I1120 22:23:29.553770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.939µs"
	
	
	==> kube-proxy [1576991fff11eb3845a8a4cb002efe82a207403e30e19f8f6299ed0c313b4ac8] <==
	I1120 22:22:58.349325       1 server_others.go:69] "Using iptables proxy"
	I1120 22:22:58.395430       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 22:22:58.578767       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:22:58.581198       1 server_others.go:152] "Using iptables Proxier"
	I1120 22:22:58.581236       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 22:22:58.581245       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 22:22:58.581277       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 22:22:58.581486       1 server.go:846] "Version info" version="v1.28.0"
	I1120 22:22:58.581495       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:22:58.589674       1 config.go:188] "Starting service config controller"
	I1120 22:22:58.589700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 22:22:58.589725       1 config.go:97] "Starting endpoint slice config controller"
	I1120 22:22:58.589729       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 22:22:58.590198       1 config.go:315] "Starting node config controller"
	I1120 22:22:58.590205       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 22:22:58.690756       1 shared_informer.go:318] Caches are synced for node config
	I1120 22:22:58.690785       1 shared_informer.go:318] Caches are synced for service config
	I1120 22:22:58.690810       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502] <==
	I1120 22:22:54.051573       1 serving.go:348] Generated self-signed cert in-memory
	W1120 22:22:56.921197       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:22:56.921311       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:22:56.921345       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:22:56.921385       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:22:57.125598       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1120 22:22:57.125697       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:22:57.132284       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:22:57.132341       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1120 22:22:57.134889       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1120 22:22:57.135240       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1120 22:22:57.233576       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: I1120 22:22:57.494007     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-xtables-lock\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: I1120 22:22:57.494146     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-lib-modules\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: W1120 22:22:57.843556     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b WatchSource:0}: Error finding container 3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b: Status 404 returned error can't find the container with id 3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.237271     781 topology_manager.go:215] "Topology Admit Handler" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.238497     781 topology_manager.go:215] "Topology Admit Handler" podUID="c6b6317d-6005-4477-ae37-06c8f92438a3" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395568     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b209a8ed-1146-4cc3-b47f-f5481b75bb98-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pppjs\" (UID: \"b209a8ed-1146-4cc3-b47f-f5481b75bb98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395640     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxvz\" (UniqueName: \"kubernetes.io/projected/c6b6317d-6005-4477-ae37-06c8f92438a3-kube-api-access-zrxvz\") pod \"kubernetes-dashboard-8694d4445c-pvh8p\" (UID: \"c6b6317d-6005-4477-ae37-06c8f92438a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395671     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbw5k\" (UniqueName: \"kubernetes.io/projected/b209a8ed-1146-4cc3-b47f-f5481b75bb98-kube-api-access-rbw5k\") pod \"dashboard-metrics-scraper-5f989dc9cf-pppjs\" (UID: \"b209a8ed-1146-4cc3-b47f-f5481b75bb98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395697     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c6b6317d-6005-4477-ae37-06c8f92438a3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pvh8p\" (UID: \"c6b6317d-6005-4477-ae37-06c8f92438a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: W1120 22:23:10.562697     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7 WatchSource:0}: Error finding container 1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7: Status 404 returned error can't find the container with id 1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: W1120 22:23:10.594210     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac WatchSource:0}: Error finding container 1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac: Status 404 returned error can't find the container with id 1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac
	Nov 20 22:23:20 old-k8s-version-443192 kubelet[781]: I1120 22:23:20.647580     781 scope.go:117] "RemoveContainer" containerID="72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366"
	Nov 20 22:23:20 old-k8s-version-443192 kubelet[781]: I1120 22:23:20.664477     781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p" podStartSLOduration=5.949281957 podCreationTimestamp="2025-11-20 22:23:10 +0000 UTC" firstStartedPulling="2025-11-20 22:23:10.567847107 +0000 UTC m=+19.293252846" lastFinishedPulling="2025-11-20 22:23:15.28297052 +0000 UTC m=+24.008376276" observedRunningTime="2025-11-20 22:23:15.627014376 +0000 UTC m=+24.352420149" watchObservedRunningTime="2025-11-20 22:23:20.664405387 +0000 UTC m=+29.389811184"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: I1120 22:23:21.651448     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: I1120 22:23:21.651965     781 scope.go:117] "RemoveContainer" containerID="72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: E1120 22:23:21.662881     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:22 old-k8s-version-443192 kubelet[781]: I1120 22:23:22.654288     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:22 old-k8s-version-443192 kubelet[781]: E1120 22:23:22.654573     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:28 old-k8s-version-443192 kubelet[781]: I1120 22:23:28.669296     781 scope.go:117] "RemoveContainer" containerID="9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e"
	Nov 20 22:23:30 old-k8s-version-443192 kubelet[781]: I1120 22:23:30.540773     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:30 old-k8s-version-443192 kubelet[781]: E1120 22:23:30.541095     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:43 old-k8s-version-443192 kubelet[781]: I1120 22:23:43.631432     781 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [09cd542a1b6789791215f9991090113aa07db1c1dd6155ecc1a82452ba0a9b66] <==
	2025/11/20 22:23:15 Starting overwatch
	2025/11/20 22:23:15 Using namespace: kubernetes-dashboard
	2025/11/20 22:23:15 Using in-cluster config to connect to apiserver
	2025/11/20 22:23:15 Using secret token for csrf signing
	2025/11/20 22:23:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:23:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:23:15 Successful initial request to the apiserver, version: v1.28.0
	2025/11/20 22:23:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:23:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:23:15 Generating JWE encryption key
	2025/11/20 22:23:16 Initializing JWE encryption key from synchronized object
	2025/11/20 22:23:16 Creating in-cluster Sidecar client
	2025/11/20 22:23:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:23:16 Serving insecurely on HTTP port: 9090
	2025/11/20 22:23:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e] <==
	I1120 22:22:58.307325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:23:28.309164       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36] <==
	I1120 22:23:28.724604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:23:28.738722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:23:28.738864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 22:23:46.140861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:23:46.144891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc0d96b6-2eea-47ae-a652-17e46e27b3bc", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c became leader
	I1120 22:23:46.145052       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c!
	I1120 22:23:46.245806       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-443192 -n old-k8s-version-443192: exit status 2 (371.539462ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-443192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-443192
helpers_test.go:243: (dbg) docker inspect old-k8s-version-443192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	        "Created": "2025-11-20T22:21:23.635114568Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:22:44.682781717Z",
	            "FinishedAt": "2025-11-20T22:22:43.859460237Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hostname",
	        "HostsPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/hosts",
	        "LogPath": "/var/lib/docker/containers/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753-json.log",
	        "Name": "/old-k8s-version-443192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-443192:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-443192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753",
	                "LowerDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47987c7e74f567420a768514335b2999858d9d631e215d3a2af49036037c60e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-443192",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-443192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-443192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-443192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2431d2fea12360a68455810c35eb44b387373c8b6c0b2224b02c1abd7057ffb7",
	            "SandboxKey": "/var/run/docker/netns/2431d2fea123",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-443192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:78:52:57:12:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be8765199279f8eee237afe7c8b9f46458c0018ce58bf28750fa9832048503b9",
	                    "EndpointID": "cb9a2eee9a93fbb4be060164629245e9b7812d0e1bd3544ee7e2867f0eb3254c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-443192",
	                        "947acc53b1a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192: exit status 2 (375.012073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-443192 logs -n 25: (1.405524719s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-640880 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo containerd config dump                                                                                                                                                                                                  │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo crio config                                                                                                                                                                                                             │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ delete  │ -p cilium-640880                                                                                                                                                                                                                              │ cilium-640880             │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:19 UTC │
	│ start   │ -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p kubernetes-upgrade-410652                                                                                                                                                                                                                  │ kubernetes-upgrade-410652 │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420078    │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p force-systemd-env-833370                                                                                                                                                                                                                   │ force-systemd-env-833370  │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ cert-options-961311 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192    │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:22:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:22:44.394281 1020891 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:22:44.394475 1020891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:22:44.394505 1020891 out.go:374] Setting ErrFile to fd 2...
	I1120 22:22:44.394529 1020891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:22:44.394790 1020891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:22:44.395317 1020891 out.go:368] Setting JSON to false
	I1120 22:22:44.396309 1020891 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18290,"bootTime":1763659075,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:22:44.396413 1020891 start.go:143] virtualization:  
	I1120 22:22:44.399597 1020891 out.go:179] * [old-k8s-version-443192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:22:44.403368 1020891 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:22:44.403454 1020891 notify.go:221] Checking for updates...
	I1120 22:22:44.409213 1020891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:22:44.412128 1020891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:44.415088 1020891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:22:44.417906 1020891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:22:44.420684 1020891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:22:44.424258 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:44.427907 1020891 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 22:22:44.430789 1020891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:22:44.469086 1020891 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:22:44.469216 1020891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:22:44.527632 1020891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:22:44.517753918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:22:44.527750 1020891 docker.go:319] overlay module found
	I1120 22:22:44.531086 1020891 out.go:179] * Using the docker driver based on existing profile
	I1120 22:22:44.533983 1020891 start.go:309] selected driver: docker
	I1120 22:22:44.534004 1020891 start.go:930] validating driver "docker" against &{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:44.534103 1020891 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:22:44.534838 1020891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:22:44.590107 1020891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:22:44.58041212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:22:44.590440 1020891 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:22:44.590477 1020891 cni.go:84] Creating CNI manager for ""
	I1120 22:22:44.590538 1020891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:22:44.590582 1020891 start.go:353] cluster config:
	{Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:44.593891 1020891 out.go:179] * Starting "old-k8s-version-443192" primary control-plane node in "old-k8s-version-443192" cluster
	I1120 22:22:44.596732 1020891 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:22:44.599695 1020891 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:22:44.602610 1020891 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:22:44.602665 1020891 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1120 22:22:44.602678 1020891 cache.go:65] Caching tarball of preloaded images
	I1120 22:22:44.602680 1020891 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:22:44.602769 1020891 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:22:44.602797 1020891 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1120 22:22:44.602914 1020891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:22:44.623496 1020891 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:22:44.623520 1020891 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:22:44.623534 1020891 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:22:44.623558 1020891 start.go:360] acquireMachinesLock for old-k8s-version-443192: {Name:mk170647942fc2bf46e44d6cf36b5ae812935bb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:22:44.623618 1020891 start.go:364] duration metric: took 37.153µs to acquireMachinesLock for "old-k8s-version-443192"
	I1120 22:22:44.623643 1020891 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:22:44.623650 1020891 fix.go:54] fixHost starting: 
	I1120 22:22:44.624004 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:44.642352 1020891 fix.go:112] recreateIfNeeded on old-k8s-version-443192: state=Stopped err=<nil>
	W1120 22:22:44.642383 1020891 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:22:44.645586 1020891 out.go:252] * Restarting existing docker container for "old-k8s-version-443192" ...
	I1120 22:22:44.645674 1020891 cli_runner.go:164] Run: docker start old-k8s-version-443192
	I1120 22:22:44.930668 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:44.954024 1020891 kic.go:430] container "old-k8s-version-443192" state is running.
	I1120 22:22:44.954551 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:44.980097 1020891 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/config.json ...
	I1120 22:22:44.980937 1020891 machine.go:94] provisionDockerMachine start ...
	I1120 22:22:44.981025 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:45.004082 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:45.004434 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:45.004445 1020891 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:22:45.005269 1020891 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:22:48.154693 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:22:48.154718 1020891 ubuntu.go:182] provisioning hostname "old-k8s-version-443192"
	I1120 22:22:48.154789 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.173312 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.173740 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.173759 1020891 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-443192 && echo "old-k8s-version-443192" | sudo tee /etc/hostname
	I1120 22:22:48.324379 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-443192
	
	I1120 22:22:48.324484 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.343041 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.343359 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.343381 1020891 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-443192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-443192/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-443192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:22:48.487310 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:22:48.487348 1020891 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:22:48.487381 1020891 ubuntu.go:190] setting up certificates
	I1120 22:22:48.487392 1020891 provision.go:84] configureAuth start
	I1120 22:22:48.487454 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:48.505335 1020891 provision.go:143] copyHostCerts
	I1120 22:22:48.505411 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:22:48.505426 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:22:48.505503 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:22:48.505610 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:22:48.505621 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:22:48.505649 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:22:48.505716 1020891 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:22:48.505724 1020891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:22:48.505751 1020891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:22:48.505813 1020891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-443192 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-443192]
	I1120 22:22:48.614219 1020891 provision.go:177] copyRemoteCerts
	I1120 22:22:48.614292 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:22:48.614338 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.632020 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:48.735402 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:22:48.755534 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 22:22:48.775604 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:22:48.794602 1020891 provision.go:87] duration metric: took 307.185397ms to configureAuth
	I1120 22:22:48.794625 1020891 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:22:48.794814 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:48.794916 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:48.818932 1020891 main.go:143] libmachine: Using SSH client type: native
	I1120 22:22:48.819334 1020891 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1120 22:22:48.819403 1020891 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:22:49.184920 1020891 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:22:49.184953 1020891 machine.go:97] duration metric: took 4.204000561s to provisionDockerMachine
	I1120 22:22:49.184965 1020891 start.go:293] postStartSetup for "old-k8s-version-443192" (driver="docker")
	I1120 22:22:49.184975 1020891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:22:49.185035 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:22:49.185088 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.204436 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.307151 1020891 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:22:49.310381 1020891 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:22:49.310414 1020891 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:22:49.310426 1020891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:22:49.310481 1020891 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:22:49.310568 1020891 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:22:49.310686 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:22:49.318229 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:22:49.336998 1020891 start.go:296] duration metric: took 152.016752ms for postStartSetup
	I1120 22:22:49.337120 1020891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:22:49.337207 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.355474 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.452855 1020891 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:22:49.457974 1020891 fix.go:56] duration metric: took 4.834316045s for fixHost
	I1120 22:22:49.458001 1020891 start.go:83] releasing machines lock for "old-k8s-version-443192", held for 4.834370371s
	I1120 22:22:49.458082 1020891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-443192
	I1120 22:22:49.476110 1020891 ssh_runner.go:195] Run: cat /version.json
	I1120 22:22:49.476163 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.476163 1020891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:22:49.476225 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:49.495834 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.498712 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:49.598793 1020891 ssh_runner.go:195] Run: systemctl --version
	I1120 22:22:49.694222 1020891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:22:49.731899 1020891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:22:49.736318 1020891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:22:49.736466 1020891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:22:49.744936 1020891 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:22:49.744961 1020891 start.go:496] detecting cgroup driver to use...
	I1120 22:22:49.744993 1020891 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:22:49.745058 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:22:49.761323 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:22:49.775082 1020891 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:22:49.775146 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:22:49.790489 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:22:49.804844 1020891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:22:49.945243 1020891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:22:50.077046 1020891 docker.go:234] disabling docker service ...
	I1120 22:22:50.077197 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:22:50.095088 1020891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:22:50.109764 1020891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:22:50.241604 1020891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:22:50.367805 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:22:50.382123 1020891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:22:50.397431 1020891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1120 22:22:50.397492 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.408084 1020891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:22:50.408152 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.417650 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.427111 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.436837 1020891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:22:50.445353 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.455897 1020891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.464827 1020891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:22:50.474433 1020891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:22:50.484110 1020891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:22:50.493511 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:50.613212 1020891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:22:50.786089 1020891 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:22:50.786171 1020891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:22:50.789981 1020891 start.go:564] Will wait 60s for crictl version
	I1120 22:22:50.790045 1020891 ssh_runner.go:195] Run: which crictl
	I1120 22:22:50.793540 1020891 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:22:50.825692 1020891 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:22:50.825843 1020891 ssh_runner.go:195] Run: crio --version
	I1120 22:22:50.865186 1020891 ssh_runner.go:195] Run: crio --version
	I1120 22:22:50.899338 1020891 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1120 22:22:50.902230 1020891 cli_runner.go:164] Run: docker network inspect old-k8s-version-443192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:22:50.918852 1020891 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:22:50.922888 1020891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:22:50.933738 1020891 kubeadm.go:884] updating cluster {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:22:50.933862 1020891 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 22:22:50.933919 1020891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:22:50.969206 1020891 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:22:50.969234 1020891 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:22:50.969291 1020891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:22:50.999257 1020891 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:22:50.999281 1020891 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:22:50.999288 1020891 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1120 22:22:50.999389 1020891 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-443192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:22:50.999468 1020891 ssh_runner.go:195] Run: crio config
	I1120 22:22:51.055672 1020891 cni.go:84] Creating CNI manager for ""
	I1120 22:22:51.055697 1020891 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:22:51.055715 1020891 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:22:51.055738 1020891 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-443192 NodeName:old-k8s-version-443192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:22:51.055885 1020891 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-443192"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:22:51.055965 1020891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 22:22:51.064414 1020891 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:22:51.064575 1020891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:22:51.072703 1020891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1120 22:22:51.087245 1020891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:22:51.101671 1020891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1120 22:22:51.116016 1020891 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:22:51.120279 1020891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:22:51.131776 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:51.255662 1020891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:22:51.271757 1020891 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192 for IP: 192.168.85.2
	I1120 22:22:51.271780 1020891 certs.go:195] generating shared ca certs ...
	I1120 22:22:51.271831 1020891 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:51.272006 1020891 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:22:51.272084 1020891 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:22:51.272098 1020891 certs.go:257] generating profile certs ...
	I1120 22:22:51.272233 1020891 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.key
	I1120 22:22:51.272329 1020891 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key.3493d06e
	I1120 22:22:51.272396 1020891 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key
	I1120 22:22:51.272542 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:22:51.272594 1020891 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:22:51.272609 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:22:51.272637 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:22:51.272690 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:22:51.272726 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:22:51.272824 1020891 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:22:51.273510 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:22:51.297325 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:22:51.317556 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:22:51.338774 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:22:51.360333 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 22:22:51.380972 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:22:51.403738 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:22:51.432212 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:22:51.460422 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:22:51.479406 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:22:51.498296 1020891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:22:51.518273 1020891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:22:51.531969 1020891 ssh_runner.go:195] Run: openssl version
	I1120 22:22:51.538355 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.546627 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:22:51.555245 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.559566 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.559678 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:22:51.601914 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:22:51.610292 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.618046 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:22:51.629840 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.633800 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.633873 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:22:51.676429 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:22:51.683679 1020891 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.690876 1020891 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:22:51.698397 1020891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.702244 1020891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.702308 1020891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:22:51.743160 1020891 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:22:51.750658 1020891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:22:51.754416 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:22:51.795375 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:22:51.837468 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:22:51.878566 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:22:51.942508 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:22:52.007936 1020891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 22:22:52.066374 1020891 kubeadm.go:401] StartCluster: {Name:old-k8s-version-443192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-443192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:22:52.066519 1020891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:22:52.066616 1020891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:22:52.149758 1020891 cri.go:89] found id: "08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d"
	I1120 22:22:52.149829 1020891 cri.go:89] found id: "d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4"
	I1120 22:22:52.149847 1020891 cri.go:89] found id: "9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9"
	I1120 22:22:52.149867 1020891 cri.go:89] found id: "0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502"
	I1120 22:22:52.149903 1020891 cri.go:89] found id: ""
	I1120 22:22:52.149974 1020891 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:22:52.170210 1020891 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:22:52Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:22:52.170362 1020891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:22:52.186140 1020891 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:22:52.186207 1020891 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:22:52.186297 1020891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:22:52.201800 1020891 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:22:52.202456 1020891 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-443192" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:52.202760 1020891 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-443192" cluster setting kubeconfig missing "old-k8s-version-443192" context setting]
	I1120 22:22:52.203352 1020891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:52.204938 1020891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:22:52.218608 1020891 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:22:52.218700 1020891 kubeadm.go:602] duration metric: took 32.473009ms to restartPrimaryControlPlane
	I1120 22:22:52.218726 1020891 kubeadm.go:403] duration metric: took 152.363341ms to StartCluster
	I1120 22:22:52.218766 1020891 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:52.218847 1020891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:22:52.219956 1020891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:22:52.220238 1020891 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:22:52.220625 1020891 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:22:52.220708 1020891 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-443192"
	I1120 22:22:52.220718 1020891 addons.go:70] Setting dashboard=true in profile "old-k8s-version-443192"
	I1120 22:22:52.220734 1020891 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-443192"
	W1120 22:22:52.220741 1020891 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:22:52.220765 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.220796 1020891 addons.go:239] Setting addon dashboard=true in "old-k8s-version-443192"
	W1120 22:22:52.220807 1020891 addons.go:248] addon dashboard should already be in state true
	I1120 22:22:52.220834 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.221241 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.221297 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.224441 1020891 config.go:182] Loaded profile config "old-k8s-version-443192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 22:22:52.224535 1020891 out.go:179] * Verifying Kubernetes components...
	I1120 22:22:52.224783 1020891 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-443192"
	I1120 22:22:52.224823 1020891 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-443192"
	I1120 22:22:52.225162 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.230452 1020891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:22:52.278473 1020891 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-443192"
	W1120 22:22:52.278495 1020891 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:22:52.278520 1020891 host.go:66] Checking if "old-k8s-version-443192" exists ...
	I1120 22:22:52.278937 1020891 cli_runner.go:164] Run: docker container inspect old-k8s-version-443192 --format={{.State.Status}}
	I1120 22:22:52.282768 1020891 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:22:52.282884 1020891 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:22:52.289307 1020891 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:22:52.289409 1020891 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:52.289419 1020891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:22:52.289481 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.293261 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:22:52.293286 1020891 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:22:52.293353 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.329110 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.335951 1020891 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:52.335972 1020891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:22:52.336040 1020891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-443192
	I1120 22:22:52.362619 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.376121 1020891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/old-k8s-version-443192/id_rsa Username:docker}
	I1120 22:22:52.594878 1020891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:22:52.623471 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:22:52.644224 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:22:52.645723 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:22:52.645744 1020891 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:22:52.652237 1020891 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:52.696979 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:22:52.697015 1020891 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:22:52.810223 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:22:52.810296 1020891 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:22:52.883878 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:22:52.883943 1020891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:22:52.932774 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:22:52.932863 1020891 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:22:52.962601 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:22:52.962684 1020891 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:22:52.987954 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:22:52.988027 1020891 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:22:53.020858 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:22:53.020933 1020891 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:22:53.045591 1020891 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:22:53.045665 1020891 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:22:53.068309 1020891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:22:57.090712 1020891 node_ready.go:49] node "old-k8s-version-443192" is "Ready"
	I1120 22:22:57.090789 1020891 node_ready.go:38] duration metric: took 4.438507258s for node "old-k8s-version-443192" to be "Ready" ...
	I1120 22:22:57.090818 1020891 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:22:57.090907 1020891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:22:58.759551 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.136041596s)
	I1120 22:22:58.759603 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.115356194s)
	I1120 22:22:59.310058 1020891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.241655313s)
	I1120 22:22:59.310089 1020891 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.219121938s)
	I1120 22:22:59.310284 1020891 api_server.go:72] duration metric: took 7.089991015s to wait for apiserver process to appear ...
	I1120 22:22:59.310293 1020891 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:22:59.310314 1020891 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:22:59.313474 1020891 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-443192 addons enable metrics-server
	
	I1120 22:22:59.316001 1020891 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1120 22:22:59.319843 1020891 addons.go:515] duration metric: took 7.099197575s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
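
The addon flow visible above is: each manifest is copied to /etc/kubernetes/addons over SSH (the "scp ... -->" lines), then everything is applied with the bundled kubectl against the in-VM kubeconfig at /var/lib/minikube/kubeconfig. A minimal sketch of that apply step using the exact command shape from the log; the helper name and the two-manifest example are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddonManifests mirrors the batched apply from the log above, assuming
    // the manifests were already copied into /etc/kubernetes/addons on the node.
    func applyAddonManifests(manifests ...string) error {
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig", // sudo accepts VAR=value prefixes
    		"/var/lib/minikube/binaries/v1.28.0/kubectl", "apply",
    	}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %w\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyAddonManifests(
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	)
    	fmt.Println("apply error:", err)
    }
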
	I1120 22:22:59.324682 1020891 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:22:59.326283 1020891 api_server.go:141] control plane version: v1.28.0
	I1120 22:22:59.326355 1020891 api_server.go:131] duration metric: took 16.040888ms to wait for apiserver health ...
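
The healthz wait above is a plain HTTPS GET against https://192.168.85.2:8443/healthz; a 200 response with body "ok" is taken as healthy, after which the control-plane version is read. A minimal sketch of the same probe (InsecureSkipVerify is used only to keep the example short; a real client should trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // apiserverHealthy issues the same GET /healthz probe as the log above and
    // reports whether the apiserver answered 200.
    func apiserverHealthy(endpoint string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Brevity only: a real check should trust the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.168.85.2:8443")
    	fmt.Println("healthy:", ok, "err:", err)
    }
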
	I1120 22:22:59.326380 1020891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:22:59.331734 1020891 system_pods.go:59] 8 kube-system pods found
	I1120 22:22:59.331822 1020891 system_pods.go:61] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:59.331861 1020891 system_pods.go:61] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:22:59.331887 1020891 system_pods.go:61] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:59.331915 1020891 system_pods.go:61] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:22:59.331951 1020891 system_pods.go:61] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:22:59.331976 1020891 system_pods.go:61] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:59.332000 1020891 system_pods.go:61] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:22:59.332044 1020891 system_pods.go:61] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Running
	I1120 22:22:59.332066 1020891 system_pods.go:74] duration metric: took 5.66614ms to wait for pod list to return data ...
	I1120 22:22:59.332102 1020891 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:22:59.334932 1020891 default_sa.go:45] found service account: "default"
	I1120 22:22:59.335038 1020891 default_sa.go:55] duration metric: took 2.910659ms for default service account to be created ...
	I1120 22:22:59.335065 1020891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:22:59.338783 1020891 system_pods.go:86] 8 kube-system pods found
	I1120 22:22:59.338818 1020891 system_pods.go:89] "coredns-5dd5756b68-q7jgh" [b00478d4-df59-4e3b-9e06-d6dc59c4430f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:22:59.338827 1020891 system_pods.go:89] "etcd-old-k8s-version-443192" [c30065df-9ec7-453e-b779-96af2c2f8730] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:22:59.338834 1020891 system_pods.go:89] "kindnet-ch2km" [960a21f2-f0bc-4d3e-a058-91b7d45a0d7b] Running
	I1120 22:22:59.338841 1020891 system_pods.go:89] "kube-apiserver-old-k8s-version-443192" [b64a6e1f-7c43-4917-95a9-923853091074] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:22:59.338847 1020891 system_pods.go:89] "kube-controller-manager-old-k8s-version-443192" [4ba54de8-17f5-4a0d-b5a3-a8d0c8c5931a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:22:59.338853 1020891 system_pods.go:89] "kube-proxy-srvjx" [46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0] Running
	I1120 22:22:59.338861 1020891 system_pods.go:89] "kube-scheduler-old-k8s-version-443192" [945b7ba2-b725-420b-b25e-eddc4e56bb75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:22:59.338869 1020891 system_pods.go:89] "storage-provisioner" [8f6e35f9-c59f-4a38-b658-c7acf5d0df1b] Running
	I1120 22:22:59.338878 1020891 system_pods.go:126] duration metric: took 3.792583ms to wait for k8s-apps to be running ...
	I1120 22:22:59.338891 1020891 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:22:59.338951 1020891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:22:59.352449 1020891 system_svc.go:56] duration metric: took 13.548245ms WaitForService to wait for kubelet
	I1120 22:22:59.352478 1020891 kubeadm.go:587] duration metric: took 7.132186282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:22:59.352498 1020891 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:22:59.355702 1020891 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:22:59.355738 1020891 node_conditions.go:123] node cpu capacity is 2
	I1120 22:22:59.355752 1020891 node_conditions.go:105] duration metric: took 3.248715ms to run NodePressure ...
	I1120 22:22:59.355769 1020891 start.go:242] waiting for startup goroutines ...
	I1120 22:22:59.355780 1020891 start.go:247] waiting for cluster config update ...
	I1120 22:22:59.355791 1020891 start.go:256] writing updated cluster config ...
	I1120 22:22:59.356089 1020891 ssh_runner.go:195] Run: rm -f paused
	I1120 22:22:59.359745 1020891 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:22:59.364009 1020891 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:23:01.369854 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:03.370186 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:05.370616 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:07.870311 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:09.871231 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:12.371111 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:14.870465 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:16.870794 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:18.871449 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:21.370931 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:23.869596 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:25.870422 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	W1120 22:23:28.371679 1020891 pod_ready.go:104] pod "coredns-5dd5756b68-q7jgh" is not "Ready", error: <nil>
	I1120 22:23:29.870375 1020891 pod_ready.go:94] pod "coredns-5dd5756b68-q7jgh" is "Ready"
	I1120 22:23:29.870407 1020891 pod_ready.go:86] duration metric: took 30.506370796s for pod "coredns-5dd5756b68-q7jgh" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.873549 1020891 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.878857 1020891 pod_ready.go:94] pod "etcd-old-k8s-version-443192" is "Ready"
	I1120 22:23:29.878888 1020891 pod_ready.go:86] duration metric: took 5.310535ms for pod "etcd-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.882038 1020891 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.887289 1020891 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-443192" is "Ready"
	I1120 22:23:29.887317 1020891 pod_ready.go:86] duration metric: took 5.251596ms for pod "kube-apiserver-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:29.890500 1020891 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.077385 1020891 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-443192" is "Ready"
	I1120 22:23:30.077420 1020891 pod_ready.go:86] duration metric: took 186.892047ms for pod "kube-controller-manager-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.269696 1020891 pod_ready.go:83] waiting for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.668239 1020891 pod_ready.go:94] pod "kube-proxy-srvjx" is "Ready"
	I1120 22:23:30.668268 1020891 pod_ready.go:86] duration metric: took 398.54114ms for pod "kube-proxy-srvjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:30.869073 1020891 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:31.268847 1020891 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-443192" is "Ready"
	I1120 22:23:31.268882 1020891 pod_ready.go:86] duration metric: took 399.781016ms for pod "kube-scheduler-old-k8s-version-443192" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:23:31.268895 1020891 pod_ready.go:40] duration metric: took 31.909119901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
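
The "extra waiting" loop above polls each labelled kube-system pod until its Ready condition turns True; coredns-5dd5756b68-q7jgh took roughly 30s here while the other control-plane pods were already Ready. A minimal client-go sketch of the same readiness test, assuming kubeconfig access to this cluster and using only the k8s-app=kube-dns selector from the log (the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
    	}
    }
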
	I1120 22:23:31.324704 1020891 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1120 22:23:31.327899 1020891 out.go:203] 
	W1120 22:23:31.330744 1020891 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 22:23:31.333608 1020891 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 22:23:31.336541 1020891 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-443192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 22:23:21 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:21.678343595Z" level=info msg="Removed container 72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs/dashboard-metrics-scraper" id=69e3917e-6d1d-4262-bda9-45e30cc16b97 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:23:28 old-k8s-version-443192 conmon[1145]: conmon 9985fcead7c1c65a99bb <ninfo>: container 1155 exited with status 1
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.669981681Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2133167-f8c0-4bf4-8a25-3f35542f4d16 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.671757843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2541dfa-1681-4263-9772-f3c8e044386d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.672626071Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b936a84e-bd34-4316-b421-30b7fb3fa0c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.672769942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.677936146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678209938Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5d473324a82226ce26c6c79378e70d03463ec005f439f70eb712349054c3724e/merged/etc/passwd: no such file or directory"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678235588Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5d473324a82226ce26c6c79378e70d03463ec005f439f70eb712349054c3724e/merged/etc/group: no such file or directory"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.678473343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.704473727Z" level=info msg="Created container a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36: kube-system/storage-provisioner/storage-provisioner" id=b936a84e-bd34-4316-b421-30b7fb3fa0c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.705785538Z" level=info msg="Starting container: a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36" id=4ab45e5c-34a6-4982-a1da-a199ef849db7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:23:28 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:28.708212072Z" level=info msg="Started container" PID=1633 containerID=a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36 description=kube-system/storage-provisioner/storage-provisioner id=4ab45e5c-34a6-4982-a1da-a199ef849db7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffd0babd15674634c8caa2e125565a77ff2b5f6393b27217e4f983ae5a7be78a
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.418151848Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.42452985Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.424566642Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.424592267Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.427928343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.4279615Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.427977771Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431591972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431628469Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.431653339Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.434887932Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:23:38 old-k8s-version-443192 crio[653]: time="2025-11-20T22:23:38.434927908Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a188c4e4fdda0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   ffd0babd15674       storage-provisioner                              kube-system
	9741b34fa9e85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   1                   1ac7a300950d4       dashboard-metrics-scraper-5f989dc9cf-pppjs       kubernetes-dashboard
	09cd542a1b678       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   1b57c879c91c6       kubernetes-dashboard-8694d4445c-pvh8p            kubernetes-dashboard
	68468b5e3bffb       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   ec2dadf3a1066       coredns-5dd5756b68-q7jgh                         kube-system
	babee24b5f037       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   3c3780f139d5f       kindnet-ch2km                                    kube-system
	0177ef5bb9c7f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   c1edbfba94e55       busybox                                          default
	1576991fff11e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   dbb62011da372       kube-proxy-srvjx                                 kube-system
	9985fcead7c1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   ffd0babd15674       storage-provisioner                              kube-system
	08baca7143715       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   27ca5159eb488       kube-apiserver-old-k8s-version-443192            kube-system
	d30d232b1913b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   081357e4bc2fd       kube-controller-manager-old-k8s-version-443192   kube-system
	9dcca088872de       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   69a3573648b0b       etcd-old-k8s-version-443192                      kube-system
	0eb106aae6e3d       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   b17e087980e9d       kube-scheduler-old-k8s-version-443192            kube-system
	
	
	==> coredns [68468b5e3bffbe45e05a07c014a98788897a7948c744fc6aa4b3b47a96e34963] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59211 - 64146 "HINFO IN 2822090401044257068.8793553617202448944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024308547s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-443192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-443192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-443192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-443192
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:23:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:21:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:23:27 +0000   Thu, 20 Nov 2025 22:22:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-443192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                25366f85-c45a-4699-899a-6aa1d4483da7
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-q7jgh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-443192                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-ch2km                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-443192             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-443192    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-srvjx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-443192             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pppjs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pvh8p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-443192 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-443192 event: Registered Node old-k8s-version-443192 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-443192 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-443192 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-443192 event: Registered Node old-k8s-version-443192 in Controller
	
	
	==> dmesg <==
	[Nov20 21:54] overlayfs: idmapped layers are currently not supported
	[Nov20 21:59] overlayfs: idmapped layers are currently not supported
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9dcca088872de456ae574afdbd29f48077afe4c8f371c0f6fa7c77bceae2bfc9] <==
	{"level":"info","ts":"2025-11-20T22:22:52.297096Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T22:22:52.297149Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T22:22:52.297484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-20T22:22:52.297603Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-20T22:22:52.297969Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:22:52.298077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T22:22:52.373946Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T22:22:52.374062Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:22:52.374072Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T22:22:52.387228Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T22:22:52.387293Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T22:22:54.11643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T22:22:54.116627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.11666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.116704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.116737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-20T22:22:54.119564Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-443192 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T22:22:54.119658Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:22:54.121221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T22:22:54.121492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T22:22:54.123549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T22:22:54.148844Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T22:22:54.148955Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:23:48 up  5:05,  0 user,  load average: 1.81, 3.00, 2.47
	Linux old-k8s-version-443192 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [babee24b5f037d1430cee3e96ac245ea580fe5c334e85189c98eda6e2c23ee2f] <==
	I1120 22:22:58.219965       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:22:58.220183       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:22:58.220317       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:22:58.220329       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:22:58.220341       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:22:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:22:58.411994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:22:58.412014       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:22:58.412022       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:22:58.412303       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:23:28.411623       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:23:28.412569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:23:28.412636       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:23:28.414918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:23:29.712919       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:23:29.712949       1 metrics.go:72] Registering metrics
	I1120 22:23:29.713019       1 controller.go:711] "Syncing nftables rules"
	I1120 22:23:38.417800       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:23:38.417858       1 main.go:301] handling current node
	I1120 22:23:48.420863       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:23:48.420925       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08baca71437157118a7d970659bacffc613ba230c7a81cfca8a55f5bef63bb1d] <==
	I1120 22:22:57.069461       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 22:22:57.071899       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 22:22:57.078353       1 aggregator.go:166] initial CRD sync complete...
	I1120 22:22:57.078474       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 22:22:57.078604       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:22:57.078659       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:22:57.088705       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 22:22:57.091943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 22:22:57.092867       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1120 22:22:57.092887       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1120 22:22:57.092974       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:22:57.116549       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:22:57.147433       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1120 22:22:57.246616       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:22:57.932120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:22:59.084792       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 22:22:59.128993       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 22:22:59.163597       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:22:59.177606       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:22:59.190854       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 22:22:59.258018       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.97.186"}
	I1120 22:22:59.301984       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.195.65"}
	I1120 22:23:10.116583       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:23:10.130871       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1120 22:23:10.213252       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d30d232b1913bbcbf830559cf3873ada098fe3c7afcd389ba988f881f71008b4] <==
	I1120 22:23:10.211746       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:23:10.219241       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1120 22:23:10.220994       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-pppjs"
	I1120 22:23:10.221104       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pvh8p"
	I1120 22:23:10.230769       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 22:23:10.243702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.097794ms"
	I1120 22:23:10.244031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.166356ms"
	I1120 22:23:10.252749       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1120 22:23:10.255738       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1120 22:23:10.263320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.918661ms"
	I1120 22:23:10.263476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.606µs"
	I1120 22:23:10.263553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.455712ms"
	I1120 22:23:10.263960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.84µs"
	I1120 22:23:10.272213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.441µs"
	I1120 22:23:10.286715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.373µs"
	I1120 22:23:10.555105       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:23:10.555134       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 22:23:10.571274       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 22:23:15.649652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.914879ms"
	I1120 22:23:15.650234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.371µs"
	I1120 22:23:20.664681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.375µs"
	I1120 22:23:21.670547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.507µs"
	I1120 22:23:22.670055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.857µs"
	I1120 22:23:29.553569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.917005ms"
	I1120 22:23:29.553770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.939µs"
	
	
	==> kube-proxy [1576991fff11eb3845a8a4cb002efe82a207403e30e19f8f6299ed0c313b4ac8] <==
	I1120 22:22:58.349325       1 server_others.go:69] "Using iptables proxy"
	I1120 22:22:58.395430       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 22:22:58.578767       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:22:58.581198       1 server_others.go:152] "Using iptables Proxier"
	I1120 22:22:58.581236       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 22:22:58.581245       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 22:22:58.581277       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 22:22:58.581486       1 server.go:846] "Version info" version="v1.28.0"
	I1120 22:22:58.581495       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:22:58.589674       1 config.go:188] "Starting service config controller"
	I1120 22:22:58.589700       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 22:22:58.589725       1 config.go:97] "Starting endpoint slice config controller"
	I1120 22:22:58.589729       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 22:22:58.590198       1 config.go:315] "Starting node config controller"
	I1120 22:22:58.590205       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 22:22:58.690756       1 shared_informer.go:318] Caches are synced for node config
	I1120 22:22:58.690785       1 shared_informer.go:318] Caches are synced for service config
	I1120 22:22:58.690810       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0eb106aae6e3d943cefbdd723b0bbb278166cfebbd506495a02bbd34185a3502] <==
	I1120 22:22:54.051573       1 serving.go:348] Generated self-signed cert in-memory
	W1120 22:22:56.921197       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:22:56.921311       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:22:56.921345       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:22:56.921385       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:22:57.125598       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1120 22:22:57.125697       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:22:57.132284       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:22:57.132341       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1120 22:22:57.134889       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1120 22:22:57.135240       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1120 22:22:57.233576       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: I1120 22:22:57.494007     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-xtables-lock\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: I1120 22:22:57.494146     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0-lib-modules\") pod \"kube-proxy-srvjx\" (UID: \"46c2463c-bf7a-44ed-ad38-2fd23a4ccfb0\") " pod="kube-system/kube-proxy-srvjx"
	Nov 20 22:22:57 old-k8s-version-443192 kubelet[781]: W1120 22:22:57.843556     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b WatchSource:0}: Error finding container 3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b: Status 404 returned error can't find the container with id 3c3780f139d5f9859c91bb1a6e44edaad8e5e00b10286888219e6678a6aad19b
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.237271     781 topology_manager.go:215] "Topology Admit Handler" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.238497     781 topology_manager.go:215] "Topology Admit Handler" podUID="c6b6317d-6005-4477-ae37-06c8f92438a3" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395568     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b209a8ed-1146-4cc3-b47f-f5481b75bb98-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pppjs\" (UID: \"b209a8ed-1146-4cc3-b47f-f5481b75bb98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395640     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxvz\" (UniqueName: \"kubernetes.io/projected/c6b6317d-6005-4477-ae37-06c8f92438a3-kube-api-access-zrxvz\") pod \"kubernetes-dashboard-8694d4445c-pvh8p\" (UID: \"c6b6317d-6005-4477-ae37-06c8f92438a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395671     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbw5k\" (UniqueName: \"kubernetes.io/projected/b209a8ed-1146-4cc3-b47f-f5481b75bb98-kube-api-access-rbw5k\") pod \"dashboard-metrics-scraper-5f989dc9cf-pppjs\" (UID: \"b209a8ed-1146-4cc3-b47f-f5481b75bb98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: I1120 22:23:10.395697     781 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c6b6317d-6005-4477-ae37-06c8f92438a3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pvh8p\" (UID: \"c6b6317d-6005-4477-ae37-06c8f92438a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p"
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: W1120 22:23:10.562697     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7 WatchSource:0}: Error finding container 1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7: Status 404 returned error can't find the container with id 1b57c879c91c691e980a22768b1eea601b538bd4e32143cb9db638028c01c1f7
	Nov 20 22:23:10 old-k8s-version-443192 kubelet[781]: W1120 22:23:10.594210     781 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/947acc53b1a2882e20f276bfe6921cd40ed865b7766751770eb8625560da9753/crio-1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac WatchSource:0}: Error finding container 1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac: Status 404 returned error can't find the container with id 1ac7a300950d4669d53d6178727ceb74af581736cb14bffd5d794b5be7b7e2ac
	Nov 20 22:23:20 old-k8s-version-443192 kubelet[781]: I1120 22:23:20.647580     781 scope.go:117] "RemoveContainer" containerID="72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366"
	Nov 20 22:23:20 old-k8s-version-443192 kubelet[781]: I1120 22:23:20.664477     781 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvh8p" podStartSLOduration=5.949281957 podCreationTimestamp="2025-11-20 22:23:10 +0000 UTC" firstStartedPulling="2025-11-20 22:23:10.567847107 +0000 UTC m=+19.293252846" lastFinishedPulling="2025-11-20 22:23:15.28297052 +0000 UTC m=+24.008376276" observedRunningTime="2025-11-20 22:23:15.627014376 +0000 UTC m=+24.352420149" watchObservedRunningTime="2025-11-20 22:23:20.664405387 +0000 UTC m=+29.389811184"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: I1120 22:23:21.651448     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: I1120 22:23:21.651965     781 scope.go:117] "RemoveContainer" containerID="72a44c3aadfc4214d966af9022d93aab58fd3e084fdf5958a2b85c0021619366"
	Nov 20 22:23:21 old-k8s-version-443192 kubelet[781]: E1120 22:23:21.662881     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:22 old-k8s-version-443192 kubelet[781]: I1120 22:23:22.654288     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:22 old-k8s-version-443192 kubelet[781]: E1120 22:23:22.654573     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:28 old-k8s-version-443192 kubelet[781]: I1120 22:23:28.669296     781 scope.go:117] "RemoveContainer" containerID="9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e"
	Nov 20 22:23:30 old-k8s-version-443192 kubelet[781]: I1120 22:23:30.540773     781 scope.go:117] "RemoveContainer" containerID="9741b34fa9e85d148668cddd6abf917c4a6913a3797e2d161bad72d3fe8eb477"
	Nov 20 22:23:30 old-k8s-version-443192 kubelet[781]: E1120 22:23:30.541095     781 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pppjs_kubernetes-dashboard(b209a8ed-1146-4cc3-b47f-f5481b75bb98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pppjs" podUID="b209a8ed-1146-4cc3-b47f-f5481b75bb98"
	Nov 20 22:23:43 old-k8s-version-443192 kubelet[781]: I1120 22:23:43.631432     781 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:23:43 old-k8s-version-443192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [09cd542a1b6789791215f9991090113aa07db1c1dd6155ecc1a82452ba0a9b66] <==
	2025/11/20 22:23:15 Starting overwatch
	2025/11/20 22:23:15 Using namespace: kubernetes-dashboard
	2025/11/20 22:23:15 Using in-cluster config to connect to apiserver
	2025/11/20 22:23:15 Using secret token for csrf signing
	2025/11/20 22:23:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:23:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:23:15 Successful initial request to the apiserver, version: v1.28.0
	2025/11/20 22:23:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:23:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:23:15 Generating JWE encryption key
	2025/11/20 22:23:16 Initializing JWE encryption key from synchronized object
	2025/11/20 22:23:16 Creating in-cluster Sidecar client
	2025/11/20 22:23:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:23:16 Serving insecurely on HTTP port: 9090
	2025/11/20 22:23:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9985fcead7c1c65a99bb4a4836cdf63884e4e8a07114be23b3c00a042c12d29e] <==
	I1120 22:22:58.307325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:23:28.309164       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a188c4e4fdda0293a7adc67fa7fd0169fc8879684bf256d988451f296dfe1e36] <==
	I1120 22:23:28.724604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:23:28.738722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:23:28.738864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 22:23:46.140861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:23:46.144891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc0d96b6-2eea-47ae-a652-17e46e27b3bc", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c became leader
	I1120 22:23:46.145052       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c!
	I1120 22:23:46.245806       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-443192_251ca71c-5de3-4e85-aba3-fcdf60277c2c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-443192 -n old-k8s-version-443192
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-443192 -n old-k8s-version-443192: exit status 2 (372.796882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-443192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.086073ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:25:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
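The error chain above ("check paused: list paused: runc: sudo runc list -f json") shows the step that fails: before enabling the addon, minikube checks whether the cluster is paused by listing runc containers on the node, and that command exits non-zero because /run/runc does not exist on this crio node. A minimal manual reproduction of that same check, assuming the default-k8s-diff-port-559701 profile is still running (the command shape mirrors the ssh entries in the Audit table further down):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-559701 sudo runc list -f json
	# on this run the remote command fails with: open /run/runc: no such file or directory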
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-559701 describe deploy/metrics-server -n kube-system: exit status 1 (87.435906ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-559701 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
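For reference, the assertion above looks for the fake registry prefix in the metrics-server deployment image; a hand-run sketch of that check (hypothetical here, since on this run the deployment was never created and kubectl returns NotFound) would be:

	kubectl --context default-k8s-diff-port-559701 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4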
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-559701
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-559701:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	        "Created": "2025-11-20T22:23:58.497614948Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1025085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:23:58.57295968Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hostname",
	        "HostsPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hosts",
	        "LogPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225-json.log",
	        "Name": "/default-k8s-diff-port-559701",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-559701:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-559701",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	                "LowerDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-559701",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-559701/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-559701",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "611f44e9653ec53ef000b79d0d2aa99ec81043f09e44d62c4c2ff9ee45cca446",
	            "SandboxKey": "/var/run/docker/netns/611f44e9653e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34169"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34170"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-559701": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:94:3c:f0:ee:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f87df3640a96e74282a6fa8d1f119c94634bd199cb6db600d19a35606adfa81c",
	                    "EndpointID": "d49c3c88cc6a10616969260afe6ff98038f482130b21cc1103575a0a7b57dead",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-559701",
	                        "dec634595af0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25: (1.265910355s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-640880 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-640880                │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ ssh     │ -p cilium-640880 sudo crio config                                                                                                                                                                                                             │ cilium-640880                │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │                     │
	│ delete  │ -p cilium-640880                                                                                                                                                                                                                              │ cilium-640880                │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:19 UTC │
	│ start   │ -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-833370     │ jenkins │ v1.37.0 │ 20 Nov 25 22:19 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p kubernetes-upgrade-410652                                                                                                                                                                                                                  │ kubernetes-upgrade-410652    │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p force-systemd-env-833370                                                                                                                                                                                                                   │ force-systemd-env-833370     │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ cert-options-961311 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:24:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:24:27.107557 1027933 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:24:27.107714 1027933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:24:27.107721 1027933 out.go:374] Setting ErrFile to fd 2...
	I1120 22:24:27.107727 1027933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:24:27.107988 1027933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:24:27.108403 1027933 out.go:368] Setting JSON to false
	I1120 22:24:27.109387 1027933 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18393,"bootTime":1763659075,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:24:27.109462 1027933 start.go:143] virtualization:  
	I1120 22:24:27.113679 1027933 out.go:179] * [embed-certs-270206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:24:27.116916 1027933 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:24:27.117001 1027933 notify.go:221] Checking for updates...
	I1120 22:24:27.123276 1027933 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:24:27.126744 1027933 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:24:27.129702 1027933 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:24:27.132751 1027933 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:24:27.135677 1027933 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:24:27.139151 1027933 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:24:27.139272 1027933 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:24:27.194286 1027933 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:24:27.194412 1027933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:24:27.320350 1027933 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:24:27.302138313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:24:27.320463 1027933 docker.go:319] overlay module found
	I1120 22:24:27.323720 1027933 out.go:179] * Using the docker driver based on user configuration
	I1120 22:24:27.326496 1027933 start.go:309] selected driver: docker
	I1120 22:24:27.326511 1027933 start.go:930] validating driver "docker" against <nil>
	I1120 22:24:27.326524 1027933 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:24:27.327296 1027933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:24:27.447322 1027933 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:24:27.429542242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:24:27.447471 1027933 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 22:24:27.447733 1027933 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:24:27.450657 1027933 out.go:179] * Using Docker driver with root privileges
	I1120 22:24:27.453454 1027933 cni.go:84] Creating CNI manager for ""
	I1120 22:24:27.453524 1027933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:24:27.453535 1027933 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 22:24:27.453616 1027933 start.go:353] cluster config:
	{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:24:27.457154 1027933 out.go:179] * Starting "embed-certs-270206" primary control-plane node in "embed-certs-270206" cluster
	I1120 22:24:27.459960 1027933 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:24:27.462876 1027933 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:24:27.465699 1027933 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:24:27.465751 1027933 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:24:27.465761 1027933 cache.go:65] Caching tarball of preloaded images
	I1120 22:24:27.465770 1027933 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:24:27.465851 1027933 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:24:27.465861 1027933 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:24:27.466018 1027933 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:24:27.466035 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json: {Name:mkedc9e981a26afca06896593ad0292c122b4009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
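	(The two lines above pin down the generated cluster config and persist it to the profile's config.json under a write lock. As a rough illustration only, here is a Go sketch that serializes a hand-picked subset of the fields visible in the config dump to JSON; the struct is invented for this sketch and is not minikube's actual type.)

    // Illustrative subset of the dumped cluster config above; field names are
    // copied from the log, but the struct layout is made up for this sketch.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        ServiceCIDR       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int // MB
        CPUs             int
        EmbedCerts       bool
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cfg := ClusterConfig{
            Name:       "embed-certs-270206",
            Driver:     "docker",
            Memory:     3072,
            CPUs:       2,
            EmbedCerts: true,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.34.1",
                ClusterName:       "embed-certs-270206",
                ContainerRuntime:  "crio",
                ServiceCIDR:       "10.96.0.0/12",
            },
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // the real flow writes this to the profile's config.json
    }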
	I1120 22:24:27.490295 1027933 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:24:27.490321 1027933 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:24:27.490338 1027933 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:24:27.490360 1027933 start.go:360] acquireMachinesLock for embed-certs-270206: {Name:mk80d30c009178e97eae54d0fb9c0edcaf285b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:24:27.490495 1027933 start.go:364] duration metric: took 114.004µs to acquireMachinesLock for "embed-certs-270206"
	I1120 22:24:27.490539 1027933 start.go:93] Provisioning new machine with config: &{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:24:27.490617 1027933 start.go:125] createHost starting for "" (driver="docker")
	I1120 22:24:29.059657 1024614 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 9.222942448s
	I1120 22:24:30.154316 1024614 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.318778762s
	I1120 22:24:30.337977 1024614 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.502414922s
	I1120 22:24:30.371434 1024614 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:24:30.396347 1024614 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:24:30.412048 1024614 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:24:30.412493 1024614 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-559701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:24:30.438543 1024614 kubeadm.go:319] [bootstrap-token] Using token: 8g3yjd.gamjs6sh2fchfyu1
	I1120 22:24:30.441753 1024614 out.go:252]   - Configuring RBAC rules ...
	I1120 22:24:30.441881 1024614 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:24:30.448521 1024614 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:24:30.459294 1024614 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:24:30.466441 1024614 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:24:30.471305 1024614 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:24:30.476391 1024614 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:24:30.744700 1024614 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:24:31.329449 1024614 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:24:31.747012 1024614 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:24:31.748494 1024614 kubeadm.go:319] 
	I1120 22:24:31.748584 1024614 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:24:31.748591 1024614 kubeadm.go:319] 
	I1120 22:24:31.748672 1024614 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:24:31.748677 1024614 kubeadm.go:319] 
	I1120 22:24:31.748703 1024614 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:24:31.749161 1024614 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:24:31.749229 1024614 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:24:31.749235 1024614 kubeadm.go:319] 
	I1120 22:24:31.749292 1024614 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:24:31.749297 1024614 kubeadm.go:319] 
	I1120 22:24:31.749346 1024614 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:24:31.749351 1024614 kubeadm.go:319] 
	I1120 22:24:31.749405 1024614 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:24:31.749488 1024614 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:24:31.749560 1024614 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:24:31.749564 1024614 kubeadm.go:319] 
	I1120 22:24:31.749866 1024614 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:24:31.749949 1024614 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:24:31.749954 1024614 kubeadm.go:319] 
	I1120 22:24:31.750255 1024614 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 8g3yjd.gamjs6sh2fchfyu1 \
	I1120 22:24:31.750368 1024614 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:24:31.750570 1024614 kubeadm.go:319] 	--control-plane 
	I1120 22:24:31.750584 1024614 kubeadm.go:319] 
	I1120 22:24:31.750880 1024614 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:24:31.750891 1024614 kubeadm.go:319] 
	I1120 22:24:31.751189 1024614 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 8g3yjd.gamjs6sh2fchfyu1 \
	I1120 22:24:31.751500 1024614 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:24:31.757208 1024614 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:24:31.757556 1024614 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:24:31.757715 1024614 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
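	(The join commands printed above carry a --discovery-token-ca-cert-hash. That value is the SHA-256 of the cluster CA certificate's Subject Public Key Info, prefixed with "sha256:". A short Go sketch that recomputes it; the ca.crt path is the conventional kubeadm location and is an assumption here.)

    // Recompute kubeadm's --discovery-token-ca-cert-hash from the cluster CA cert:
    // SHA-256 over the certificate's Subject Public Key Info, hex-encoded.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }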
	I1120 22:24:31.757766 1024614 cni.go:84] Creating CNI manager for ""
	I1120 22:24:31.757788 1024614 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:24:31.778053 1024614 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:24:27.493942 1027933 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:24:27.494175 1027933 start.go:159] libmachine.API.Create for "embed-certs-270206" (driver="docker")
	I1120 22:24:27.494222 1027933 client.go:173] LocalClient.Create starting
	I1120 22:24:27.494301 1027933 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:24:27.494342 1027933 main.go:143] libmachine: Decoding PEM data...
	I1120 22:24:27.494358 1027933 main.go:143] libmachine: Parsing certificate...
	I1120 22:24:27.494432 1027933 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:24:27.494456 1027933 main.go:143] libmachine: Decoding PEM data...
	I1120 22:24:27.494473 1027933 main.go:143] libmachine: Parsing certificate...
	I1120 22:24:27.494841 1027933 cli_runner.go:164] Run: docker network inspect embed-certs-270206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:24:27.526914 1027933 cli_runner.go:211] docker network inspect embed-certs-270206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:24:27.527004 1027933 network_create.go:284] running [docker network inspect embed-certs-270206] to gather additional debugging logs...
	I1120 22:24:27.527022 1027933 cli_runner.go:164] Run: docker network inspect embed-certs-270206
	W1120 22:24:27.554140 1027933 cli_runner.go:211] docker network inspect embed-certs-270206 returned with exit code 1
	I1120 22:24:27.554168 1027933 network_create.go:287] error running [docker network inspect embed-certs-270206]: docker network inspect embed-certs-270206: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-270206 not found
	I1120 22:24:27.554181 1027933 network_create.go:289] output of [docker network inspect embed-certs-270206]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-270206 not found
	
	** /stderr **
	I1120 22:24:27.554287 1027933 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:24:27.588039 1027933 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:24:27.588421 1027933 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:24:27.588678 1027933 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:24:27.589164 1027933 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d0780}
	I1120 22:24:27.589192 1027933 network_create.go:124] attempt to create docker network embed-certs-270206 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 22:24:27.589252 1027933 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-270206 embed-certs-270206
	I1120 22:24:27.681678 1027933 network_create.go:108] docker network embed-certs-270206 192.168.76.0/24 created
	I1120 22:24:27.681710 1027933 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-270206" container
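	(network.go above walks candidate private /24 subnets, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges already claim them, then settles on 192.168.76.0/24 with gateway .1 and node IP .2. A toy Go sketch of that walk follows; the starting octet and the step of 9 are inferred from the candidates in the log, not taken from minikube's source.)

    // Walk candidate 192.168.x.0/24 subnets and pick the first one not already
    // claimed by an existing bridge. Start and step are inferred from the log.
    package main

    import "fmt"

    func main() {
        taken := map[string]bool{ // subnets the log reports as in use
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[subnet] {
                fmt.Println("skipping taken subnet", subnet)
                continue
            }
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            nodeIP := fmt.Sprintf("192.168.%d.2", octet) // first client address
            fmt.Println("using", subnet, "gateway", gateway, "node IP", nodeIP)
            break
        }
    }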
	I1120 22:24:27.681782 1027933 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:24:27.711840 1027933 cli_runner.go:164] Run: docker volume create embed-certs-270206 --label name.minikube.sigs.k8s.io=embed-certs-270206 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:24:27.748680 1027933 oci.go:103] Successfully created a docker volume embed-certs-270206
	I1120 22:24:27.748772 1027933 cli_runner.go:164] Run: docker run --rm --name embed-certs-270206-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-270206 --entrypoint /usr/bin/test -v embed-certs-270206:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:24:28.495195 1027933 oci.go:107] Successfully prepared a docker volume embed-certs-270206
	I1120 22:24:28.495281 1027933 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:24:28.495293 1027933 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 22:24:28.495357 1027933 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-270206:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 22:24:31.792153 1024614 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:24:31.796542 1024614 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:24:31.796564 1024614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:24:31.810878 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:24:32.273507 1024614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:24:32.273695 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:32.273793 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-559701 minikube.k8s.io/updated_at=2025_11_20T22_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=default-k8s-diff-port-559701 minikube.k8s.io/primary=true
	I1120 22:24:32.293419 1024614 ops.go:34] apiserver oom_adj: -16
	I1120 22:24:32.418880 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:32.919080 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:33.419504 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:33.919399 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:34.419713 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:34.919099 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:35.419267 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:35.919443 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:36.419465 1024614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:24:36.588837 1024614 kubeadm.go:1114] duration metric: took 4.315188945s to wait for elevateKubeSystemPrivileges
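	(The burst of `kubectl get sa default` runs above is a readiness poll: retry roughly every 500ms until the default service account exists, which here took about 4.3s in total. A minimal Go sketch of the same loop; the kubectl binary path and kubeconfig are simplified stand-ins for the sudo-over-ssh invocation shown in the log.)

    // Poll `kubectl get sa default` until the default service account exists,
    // sleeping ~500ms between attempts, with an overall timeout.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 60; i++ { // give up after ~30s
            cmd := exec.Command("kubectl",
                "--kubeconfig", "/var/lib/minikube/kubeconfig", // stand-in path
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }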
	I1120 22:24:36.588865 1024614 kubeadm.go:403] duration metric: took 29.073032263s to StartCluster
	I1120 22:24:36.588882 1024614 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:36.588940 1024614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:24:36.589631 1024614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:36.589841 1024614 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:24:36.589978 1024614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:24:36.590248 1024614 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:24:36.590286 1024614 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:24:36.590345 1024614 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-559701"
	I1120 22:24:36.590358 1024614 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-559701"
	I1120 22:24:36.590378 1024614 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:24:36.591385 1024614 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:24:36.591634 1024614 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-559701"
	I1120 22:24:36.591651 1024614 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-559701"
	I1120 22:24:36.591926 1024614 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:24:36.597233 1024614 out.go:179] * Verifying Kubernetes components...
	I1120 22:24:36.609256 1024614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:24:36.643208 1024614 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-559701"
	I1120 22:24:36.643277 1024614 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:24:36.643746 1024614 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:24:36.643796 1024614 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:24:33.255813 1027933 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-270206:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.760419841s)
	I1120 22:24:33.255848 1027933 kic.go:203] duration metric: took 4.760551126s to extract preloaded images to volume ...
	W1120 22:24:33.255997 1027933 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:24:33.256110 1027933 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:24:33.316126 1027933 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-270206 --name embed-certs-270206 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-270206 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-270206 --network embed-certs-270206 --ip 192.168.76.2 --volume embed-certs-270206:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:24:33.668242 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Running}}
	I1120 22:24:33.695142 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:24:33.731844 1027933 cli_runner.go:164] Run: docker exec embed-certs-270206 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:24:33.803421 1027933 oci.go:144] the created container "embed-certs-270206" has a running status.
	I1120 22:24:33.803452 1027933 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa...
	I1120 22:24:34.638083 1027933 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:24:34.661694 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:24:34.689489 1027933 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:24:34.689507 1027933 kic_runner.go:114] Args: [docker exec --privileged embed-certs-270206 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:24:34.766877 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:24:34.788794 1027933 machine.go:94] provisionDockerMachine start ...
	I1120 22:24:34.788899 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:34.815209 1027933 main.go:143] libmachine: Using SSH client type: native
	I1120 22:24:34.815578 1027933 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34172 <nil> <nil>}
	I1120 22:24:34.815595 1027933 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:24:35.001583 1027933 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:24:35.001611 1027933 ubuntu.go:182] provisioning hostname "embed-certs-270206"
	I1120 22:24:35.001684 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:35.030191 1027933 main.go:143] libmachine: Using SSH client type: native
	I1120 22:24:35.030496 1027933 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34172 <nil> <nil>}
	I1120 22:24:35.030507 1027933 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-270206 && echo "embed-certs-270206" | sudo tee /etc/hostname
	I1120 22:24:35.211483 1027933 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:24:35.211624 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:35.235286 1027933 main.go:143] libmachine: Using SSH client type: native
	I1120 22:24:35.235591 1027933 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34172 <nil> <nil>}
	I1120 22:24:35.235608 1027933 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-270206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-270206/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-270206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:24:35.399408 1027933 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:24:35.399496 1027933 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:24:35.399548 1027933 ubuntu.go:190] setting up certificates
	I1120 22:24:35.399582 1027933 provision.go:84] configureAuth start
	I1120 22:24:35.399661 1027933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:24:35.417148 1027933 provision.go:143] copyHostCerts
	I1120 22:24:35.417214 1027933 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:24:35.417223 1027933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:24:35.417297 1027933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:24:35.417405 1027933 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:24:35.417410 1027933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:24:35.417437 1027933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:24:35.417496 1027933 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:24:35.417501 1027933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:24:35.417523 1027933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:24:35.417602 1027933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.embed-certs-270206 san=[127.0.0.1 192.168.76.2 embed-certs-270206 localhost minikube]
	I1120 22:24:36.240546 1027933 provision.go:177] copyRemoteCerts
	I1120 22:24:36.240672 1027933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:24:36.240735 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:36.277569 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:24:36.383296 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:24:36.414968 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 22:24:36.451502 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:24:36.478695 1027933 provision.go:87] duration metric: took 1.079076393s to configureAuth
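	(configureAuth above copies the host CA and client certs and generates a server certificate whose SANs are 127.0.0.1, 192.168.76.2, embed-certs-270206, localhost and minikube, signed by the profile CA. A compact Go sketch of issuing a certificate with that SAN list follows; to stay short it self-signs, whereas the real flow signs with ca.pem/ca-key.pem.)

    // Issue a server certificate carrying the SAN list from the log above.
    // Self-signed here purely to keep the sketch small.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-270206"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-270206", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }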
	I1120 22:24:36.478732 1027933 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:24:36.478931 1027933 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:24:36.479077 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:36.502886 1027933 main.go:143] libmachine: Using SSH client type: native
	I1120 22:24:36.503265 1027933 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34172 <nil> <nil>}
	I1120 22:24:36.503292 1027933 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:24:36.968214 1027933 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:24:36.968240 1027933 machine.go:97] duration metric: took 2.179427316s to provisionDockerMachine
	I1120 22:24:36.968251 1027933 client.go:176] duration metric: took 9.474017992s to LocalClient.Create
	I1120 22:24:36.968269 1027933 start.go:167] duration metric: took 9.474098567s to libmachine.API.Create "embed-certs-270206"
	I1120 22:24:36.968277 1027933 start.go:293] postStartSetup for "embed-certs-270206" (driver="docker")
	I1120 22:24:36.968287 1027933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:24:36.968357 1027933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:24:36.968405 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:37.002406 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:24:36.647347 1024614 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:24:36.647381 1024614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:24:36.647461 1024614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:24:36.667515 1024614 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:24:36.667541 1024614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:24:36.667610 1024614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:24:36.691160 1024614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:24:36.715167 1024614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:24:36.950501 1024614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:24:36.982548 1024614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:24:37.059858 1024614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:24:37.230685 1024614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:24:38.108093 1024614 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.157538366s)
	I1120 22:24:38.108123 1024614 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
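	(The sed pipeline a few lines up rewrites the CoreDNS ConfigMap: it inserts a `log` directive before the `errors` line and a `hosts` stanza before the `forward . /etc/resolv.conf` line. The Corefile therefore ends up containing roughly the fragment below; only the inserted lines and their anchor lines are shown, the rest of the stock Corefile is omitted.)

        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf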
	I1120 22:24:38.590517 1024614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.607935022s)
	I1120 22:24:38.590575 1024614 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.530698221s)
	I1120 22:24:38.591348 1024614 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-559701" to be "Ready" ...
	I1120 22:24:38.591575 1024614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.360863354s)
	I1120 22:24:38.624023 1024614 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-559701" context rescaled to 1 replicas
	I1120 22:24:38.634872 1024614 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 22:24:37.126381 1027933 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:24:37.131399 1027933 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:24:37.131427 1027933 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:24:37.131438 1027933 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:24:37.131513 1027933 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:24:37.131601 1027933 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:24:37.131705 1027933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:24:37.142843 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:24:37.175253 1027933 start.go:296] duration metric: took 206.959734ms for postStartSetup
	I1120 22:24:37.175743 1027933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:24:37.204803 1027933 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:24:37.205137 1027933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:24:37.205185 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:37.236187 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:24:37.349515 1027933 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:24:37.360886 1027933 start.go:128] duration metric: took 9.870254467s to createHost
	I1120 22:24:37.360909 1027933 start.go:83] releasing machines lock for "embed-certs-270206", held for 9.870401817s
	I1120 22:24:37.360990 1027933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:24:37.389328 1027933 ssh_runner.go:195] Run: cat /version.json
	I1120 22:24:37.389395 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:37.389705 1027933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:24:37.389779 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:24:37.431202 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:24:37.444295 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:24:37.562864 1027933 ssh_runner.go:195] Run: systemctl --version
	I1120 22:24:37.734319 1027933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:24:37.796069 1027933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:24:37.801200 1027933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:24:37.801291 1027933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:24:37.841085 1027933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:24:37.841113 1027933 start.go:496] detecting cgroup driver to use...
	I1120 22:24:37.841168 1027933 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:24:37.841234 1027933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:24:37.867434 1027933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:24:37.889743 1027933 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:24:37.889829 1027933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:24:37.914156 1027933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:24:37.943281 1027933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:24:38.161717 1027933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:24:38.349538 1027933 docker.go:234] disabling docker service ...
	I1120 22:24:38.349634 1027933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:24:38.385735 1027933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:24:38.416348 1027933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:24:38.578342 1027933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:24:38.740423 1027933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:24:38.762225 1027933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:24:38.792747 1027933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:24:38.792869 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.806694 1027933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:24:38.806810 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.820445 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.838884 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.848903 1027933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:24:38.858254 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.867452 1027933 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.882587 1027933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:24:38.891998 1027933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:24:38.900940 1027933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:24:38.909942 1027933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:24:39.055697 1027933 ssh_runner.go:195] Run: sudo systemctl restart crio
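	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the values below before crio is restarted. Only the keys touched in the log are shown, grouped under the stock crio.conf section names for readability; the actual drop-in's layout may differ.)

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]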
	I1120 22:24:39.251289 1027933 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:24:39.251409 1027933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:24:39.255834 1027933 start.go:564] Will wait 60s for crictl version
	I1120 22:24:39.255949 1027933 ssh_runner.go:195] Run: which crictl
	I1120 22:24:39.259592 1027933 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:24:39.285586 1027933 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:24:39.285745 1027933 ssh_runner.go:195] Run: crio --version
	I1120 22:24:39.314482 1027933 ssh_runner.go:195] Run: crio --version
	I1120 22:24:39.348707 1027933 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:24:39.351724 1027933 cli_runner.go:164] Run: docker network inspect embed-certs-270206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:24:39.367689 1027933 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:24:39.371597 1027933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:24:39.381549 1027933 kubeadm.go:884] updating cluster {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:24:39.381677 1027933 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:24:39.381736 1027933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:24:39.415296 1027933 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:24:39.415321 1027933 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:24:39.415378 1027933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:24:39.451544 1027933 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:24:39.451565 1027933 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:24:39.451573 1027933 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:24:39.451653 1027933 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-270206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:24:39.451732 1027933 ssh_runner.go:195] Run: crio config
	I1120 22:24:39.516016 1027933 cni.go:84] Creating CNI manager for ""
	I1120 22:24:39.516037 1027933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:24:39.516050 1027933 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:24:39.516075 1027933 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-270206 NodeName:embed-certs-270206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:24:39.516206 1027933 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-270206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
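The generated configuration above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp line below). If a run like this fails later at kubeadm init, a minimal way to inspect and sanity-check the rendered file is sketched below; it assumes the embed-certs-270206 profile is still up and that the kubeadm binary staged on the node supports the config validate subcommand (present in recent releases).

	# Print the rendered config from inside the node
	minikube ssh -p embed-certs-270206 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	# Validate it with the same kubeadm binary minikube staged on the node
	minikube ssh -p embed-certs-270206 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml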
	
	I1120 22:24:39.516285 1027933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:24:39.524390 1027933 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:24:39.524507 1027933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:24:39.531977 1027933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 22:24:39.545319 1027933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:24:39.558964 1027933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1120 22:24:39.573338 1027933 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:24:39.576970 1027933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
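The /etc/hosts rewrite above is a single bash one-liner; the same steps broken out for readability (a sketch, not a verbatim re-run; the temp file name is illustrative):

	# Drop any existing tab-prefixed control-plane.minikube.internal entry,
	# append the current control-plane IP, then copy the result back over /etc/hosts.
	# Only the final cp needs root.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.76.2	control-plane.minikube.internal"
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts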
	I1120 22:24:39.588061 1027933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:24:39.699581 1027933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:24:39.716503 1027933 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206 for IP: 192.168.76.2
	I1120 22:24:39.716534 1027933 certs.go:195] generating shared ca certs ...
	I1120 22:24:39.716550 1027933 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:39.716688 1027933 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:24:39.716734 1027933 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:24:39.716746 1027933 certs.go:257] generating profile certs ...
	I1120 22:24:39.716810 1027933 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.key
	I1120 22:24:39.716827 1027933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.crt with IP's: []
	I1120 22:24:40.446697 1027933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.crt ...
	I1120 22:24:40.446730 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.crt: {Name:mk28e0674dd772e36e8d05c33b37b3facc13eab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:40.446948 1027933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.key ...
	I1120 22:24:40.446964 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.key: {Name:mkb0873ec045a485fa5fa61aeb525608668cb401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:40.447090 1027933 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386
	I1120 22:24:40.447113 1027933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt.ed27b386 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 22:24:41.524627 1027933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt.ed27b386 ...
	I1120 22:24:41.524663 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt.ed27b386: {Name:mk8e3415988b7bb0b7338314e7054a6a19193787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:41.524866 1027933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386 ...
	I1120 22:24:41.524881 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386: {Name:mk5f279dbd0da01683a3493ebd7078460dbdffbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:41.524965 1027933 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt.ed27b386 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt
	I1120 22:24:41.525050 1027933 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key
	I1120 22:24:41.525115 1027933 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key
	I1120 22:24:41.525133 1027933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt with IP's: []
	I1120 22:24:41.900512 1027933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt ...
	I1120 22:24:41.900544 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt: {Name:mk6ac460116718682165000293a3edd54a638379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:41.900743 1027933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key ...
	I1120 22:24:41.900757 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key: {Name:mkcc3f22e112aa450bae7508374c16e4bde1f426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:24:41.900955 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:24:41.900997 1027933 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:24:41.901015 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:24:41.901042 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:24:41.901070 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:24:41.901094 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:24:41.901141 1027933 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:24:41.901730 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:24:41.919618 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:24:41.940363 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:24:41.962338 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:24:41.982766 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 22:24:42.001445 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:24:42.028785 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:24:42.048251 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 22:24:42.068227 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:24:42.090364 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:24:42.114380 1027933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:24:42.136993 1027933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:24:42.153568 1027933 ssh_runner.go:195] Run: openssl version
	I1120 22:24:42.161125 1027933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:24:42.171031 1027933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:24:42.181333 1027933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:24:42.187570 1027933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:24:42.187737 1027933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:24:42.232663 1027933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:24:42.241649 1027933 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
	I1120 22:24:42.250553 1027933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:24:42.259086 1027933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:24:42.268478 1027933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:24:42.273059 1027933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:24:42.273180 1027933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:24:42.315739 1027933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:24:42.324194 1027933 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 22:24:42.333742 1027933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:24:42.341574 1027933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:24:42.349463 1027933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:24:42.353325 1027933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:24:42.353427 1027933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:24:42.394705 1027933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:24:42.402425 1027933 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
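The openssl x509 -hash calls above drive the /etc/ssl/certs/<hash>.0 symlinks that OpenSSL uses to look up CAs in a hashed certificate directory: the subject-name hash of minikubeCA.pem is b5213941, so the lookup symlink is b5213941.0. A minimal sketch of the same convention, using the paths from the log:

	# Print the subject-name hash OpenSSL uses to locate a CA in a hashed cert directory
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	# minikube then creates the lookup symlink with a .0 suffix
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0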
	I1120 22:24:42.409986 1027933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:24:42.413531 1027933 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:24:42.413583 1027933 kubeadm.go:401] StartCluster: {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:24:42.413662 1027933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:24:42.413725 1027933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:24:42.451672 1027933 cri.go:89] found id: ""
	I1120 22:24:42.451746 1027933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:24:42.459750 1027933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:24:42.468393 1027933 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:24:42.468465 1027933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:24:42.476625 1027933 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:24:42.476645 1027933 kubeadm.go:158] found existing configuration files:
	
	I1120 22:24:42.476703 1027933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:24:42.484623 1027933 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:24:42.484693 1027933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:24:42.492206 1027933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:24:42.500561 1027933 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:24:42.500628 1027933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:24:42.509257 1027933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:24:42.516911 1027933 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:24:42.516977 1027933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:24:42.524761 1027933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:24:42.532933 1027933 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:24:42.532997 1027933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 22:24:42.540596 1027933 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:24:42.580831 1027933 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 22:24:42.581038 1027933 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:24:42.609459 1027933 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:24:42.609536 1027933 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:24:42.609576 1027933 kubeadm.go:319] OS: Linux
	I1120 22:24:42.609626 1027933 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:24:42.609677 1027933 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:24:42.609728 1027933 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:24:42.609780 1027933 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:24:42.609831 1027933 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:24:42.609882 1027933 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:24:42.609929 1027933 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:24:42.609981 1027933 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:24:42.610031 1027933 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:24:42.694718 1027933 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:24:42.694842 1027933 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:24:42.695017 1027933 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 22:24:42.707391 1027933 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:24:38.638154 1024614 addons.go:515] duration metric: took 2.04784798s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1120 22:24:40.596277 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:24:43.094903 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:24:42.713160 1027933 out.go:252]   - Generating certificates and keys ...
	I1120 22:24:42.713272 1027933 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:24:42.713363 1027933 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:24:44.301553 1027933 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:24:44.974418 1027933 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:24:45.596423 1027933 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:24:46.287258 1027933 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:24:46.496887 1027933 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:24:46.497504 1027933 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-270206 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1120 22:24:45.121035 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:24:47.595183 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:24:47.610432 1027933 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:24:47.610790 1027933 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-270206 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:24:48.051673 1027933 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:24:49.756440 1027933 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:24:50.219056 1027933 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:24:50.219549 1027933 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:24:50.510008 1027933 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:24:51.246955 1027933 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 22:24:51.560407 1027933 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:24:52.067191 1027933 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:24:53.232022 1027933 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:24:53.232632 1027933 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:24:53.235280 1027933 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1120 22:24:50.095181 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:24:52.595313 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:24:53.238875 1027933 out.go:252]   - Booting up control plane ...
	I1120 22:24:53.239012 1027933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:24:53.239097 1027933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:24:53.239168 1027933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:24:53.259016 1027933 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:24:53.260900 1027933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 22:24:53.270774 1027933 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 22:24:53.272843 1027933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:24:53.272909 1027933 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:24:53.408883 1027933 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 22:24:53.409010 1027933 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 22:24:55.411382 1027933 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001051428s
	I1120 22:24:55.415402 1027933 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 22:24:55.415733 1027933 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 22:24:55.416092 1027933 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 22:24:55.416854 1027933 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1120 22:24:54.596429 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:24:57.094478 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:24:59.220288 1027933 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.803035671s
	I1120 22:25:01.463446 1027933 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.045932352s
	I1120 22:25:01.918726 1027933 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502223917s
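The control-plane-check URLs above can also be probed by hand when a start hangs at this stage. A sketch, run from inside the node (the serving certificates are self-signed, hence -k; on a kubeadm cluster these health endpoints are normally reachable without credentials):

	minikube ssh -p embed-certs-270206
	curl -sk https://192.168.76.2:8443/livez       # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez         # kube-scheduler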
	I1120 22:25:01.938784 1027933 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:25:01.955210 1027933 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:25:01.976736 1027933 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:25:01.977287 1027933 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-270206 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:25:01.993572 1027933 kubeadm.go:319] [bootstrap-token] Using token: rkz0bs.vgjurybo9mlbp8ew
	I1120 22:25:01.996795 1027933 out.go:252]   - Configuring RBAC rules ...
	I1120 22:25:01.997006 1027933 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:25:02.002725 1027933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:25:02.029338 1027933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:25:02.039643 1027933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:25:02.047417 1027933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:25:02.052865 1027933 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	W1120 22:24:59.095254 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:01.594853 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:25:02.325394 1027933 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:25:02.769707 1027933 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:25:03.328193 1027933 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:25:03.329362 1027933 kubeadm.go:319] 
	I1120 22:25:03.329436 1027933 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:25:03.329450 1027933 kubeadm.go:319] 
	I1120 22:25:03.329529 1027933 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:25:03.329539 1027933 kubeadm.go:319] 
	I1120 22:25:03.329565 1027933 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:25:03.329627 1027933 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:25:03.329682 1027933 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:25:03.329691 1027933 kubeadm.go:319] 
	I1120 22:25:03.329746 1027933 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:25:03.329754 1027933 kubeadm.go:319] 
	I1120 22:25:03.329802 1027933 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:25:03.329810 1027933 kubeadm.go:319] 
	I1120 22:25:03.329862 1027933 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:25:03.329944 1027933 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:25:03.330018 1027933 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:25:03.330027 1027933 kubeadm.go:319] 
	I1120 22:25:03.330111 1027933 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:25:03.330191 1027933 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:25:03.330200 1027933 kubeadm.go:319] 
	I1120 22:25:03.330283 1027933 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rkz0bs.vgjurybo9mlbp8ew \
	I1120 22:25:03.330390 1027933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:25:03.330415 1027933 kubeadm.go:319] 	--control-plane 
	I1120 22:25:03.330424 1027933 kubeadm.go:319] 
	I1120 22:25:03.330509 1027933 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:25:03.330527 1027933 kubeadm.go:319] 
	I1120 22:25:03.330613 1027933 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rkz0bs.vgjurybo9mlbp8ew \
	I1120 22:25:03.330718 1027933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:25:03.334270 1027933 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:25:03.334511 1027933 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:25:03.334624 1027933 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 22:25:03.334644 1027933 cni.go:84] Creating CNI manager for ""
	I1120 22:25:03.334652 1027933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:25:03.339646 1027933 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:25:03.342595 1027933 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:25:03.346941 1027933 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:25:03.346965 1027933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:25:03.360236 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:25:03.696546 1027933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:25:03.696694 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:03.696758 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-270206 minikube.k8s.io/updated_at=2025_11_20T22_25_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=embed-certs-270206 minikube.k8s.io/primary=true
	I1120 22:25:03.860922 1027933 ops.go:34] apiserver oom_adj: -16
	I1120 22:25:03.861025 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:04.361936 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:04.861134 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:05.361128 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:05.861114 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:06.361175 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:06.861144 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 22:25:03.595519 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:06.095328 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:25:07.361858 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:07.861247 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:08.361967 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:08.861651 1027933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:25:09.033824 1027933 kubeadm.go:1114] duration metric: took 5.337177891s to wait for elevateKubeSystemPrivileges
	I1120 22:25:09.033854 1027933 kubeadm.go:403] duration metric: took 26.620275635s to StartCluster
	I1120 22:25:09.033874 1027933 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:09.033949 1027933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:25:09.036337 1027933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:09.036642 1027933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:25:09.036672 1027933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:25:09.036932 1027933 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:25:09.036966 1027933 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:25:09.037037 1027933 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-270206"
	I1120 22:25:09.037053 1027933 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-270206"
	I1120 22:25:09.037074 1027933 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:25:09.037540 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:25:09.037987 1027933 addons.go:70] Setting default-storageclass=true in profile "embed-certs-270206"
	I1120 22:25:09.038017 1027933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-270206"
	I1120 22:25:09.038310 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:25:09.044880 1027933 out.go:179] * Verifying Kubernetes components...
	I1120 22:25:09.048441 1027933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:25:09.078680 1027933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:25:09.080753 1027933 addons.go:239] Setting addon default-storageclass=true in "embed-certs-270206"
	I1120 22:25:09.080898 1027933 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:25:09.081335 1027933 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:25:09.081805 1027933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:25:09.081829 1027933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:25:09.081876 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:25:09.133401 1027933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:25:09.133429 1027933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:25:09.133503 1027933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:25:09.145959 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:25:09.172819 1027933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:25:09.458961 1027933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:25:09.465607 1027933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:25:09.480814 1027933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
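The sed pipeline above injects a hosts block for host.minikube.internal into the CoreDNS Corefile before replacing the ConfigMap (the "host record injected" line below confirms it took effect). The result can be checked after the fact; a sketch:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain, inside the default server block:
	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }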
	I1120 22:25:09.480974 1027933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:25:10.283733 1027933 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 22:25:10.284704 1027933 node_ready.go:35] waiting up to 6m0s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:25:10.287074 1027933 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 22:25:10.289968 1027933 addons.go:515] duration metric: took 1.252980649s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 22:25:10.788175 1027933 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-270206" context rescaled to 1 replicas
	W1120 22:25:08.594798 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:10.595122 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:13.095493 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:12.291348 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:14.788155 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:16.788239 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:15.593899 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	W1120 22:25:17.594778 1024614 node_ready.go:57] node "default-k8s-diff-port-559701" has "Ready":"False" status (will retry)
	I1120 22:25:19.094432 1024614 node_ready.go:49] node "default-k8s-diff-port-559701" is "Ready"
	I1120 22:25:19.094467 1024614 node_ready.go:38] duration metric: took 40.503090988s for node "default-k8s-diff-port-559701" to be "Ready" ...
	I1120 22:25:19.094482 1024614 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:25:19.094537 1024614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:25:19.113973 1024614 api_server.go:72] duration metric: took 42.524104408s to wait for apiserver process to appear ...
	I1120 22:25:19.114010 1024614 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:25:19.114034 1024614 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 22:25:19.124982 1024614 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 22:25:19.127739 1024614 api_server.go:141] control plane version: v1.34.1
	I1120 22:25:19.127772 1024614 api_server.go:131] duration metric: took 13.753145ms to wait for apiserver health ...
	I1120 22:25:19.127782 1024614 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:25:19.133689 1024614 system_pods.go:59] 8 kube-system pods found
	I1120 22:25:19.133729 1024614 system_pods.go:61] "coredns-66bc5c9577-kdh8n" [de537859-5578-4115-9ba0-2986fae8dd40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:19.133737 1024614 system_pods.go:61] "etcd-default-k8s-diff-port-559701" [e687ca20-3324-4c9c-b307-fee4fc26e1cf] Running
	I1120 22:25:19.133744 1024614 system_pods.go:61] "kindnet-4g2sr" [d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f] Running
	I1120 22:25:19.133749 1024614 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-559701" [17d5e32d-ee38-4946-86f2-794e63bbd380] Running
	I1120 22:25:19.133754 1024614 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-559701" [1fa7b16a-d534-4bb5-bef9-5260b007621b] Running
	I1120 22:25:19.133759 1024614 system_pods.go:61] "kube-proxy-q6lq4" [b967db15-fd3c-4e36-939e-20736efe8c42] Running
	I1120 22:25:19.133765 1024614 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-559701" [606fc9b9-8d24-4909-b4a3-d2de3cdb9d2c] Running
	I1120 22:25:19.133771 1024614 system_pods.go:61] "storage-provisioner" [4edff332-d2b5-4acb-b661-f55dab3a9af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:19.133785 1024614 system_pods.go:74] duration metric: took 5.997148ms to wait for pod list to return data ...
	I1120 22:25:19.133795 1024614 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:25:19.138762 1024614 default_sa.go:45] found service account: "default"
	I1120 22:25:19.138801 1024614 default_sa.go:55] duration metric: took 4.996208ms for default service account to be created ...
	I1120 22:25:19.138811 1024614 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:25:19.144665 1024614 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:19.144700 1024614 system_pods.go:89] "coredns-66bc5c9577-kdh8n" [de537859-5578-4115-9ba0-2986fae8dd40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:19.144706 1024614 system_pods.go:89] "etcd-default-k8s-diff-port-559701" [e687ca20-3324-4c9c-b307-fee4fc26e1cf] Running
	I1120 22:25:19.144713 1024614 system_pods.go:89] "kindnet-4g2sr" [d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f] Running
	I1120 22:25:19.144718 1024614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-559701" [17d5e32d-ee38-4946-86f2-794e63bbd380] Running
	I1120 22:25:19.144731 1024614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-559701" [1fa7b16a-d534-4bb5-bef9-5260b007621b] Running
	I1120 22:25:19.144739 1024614 system_pods.go:89] "kube-proxy-q6lq4" [b967db15-fd3c-4e36-939e-20736efe8c42] Running
	I1120 22:25:19.144743 1024614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-559701" [606fc9b9-8d24-4909-b4a3-d2de3cdb9d2c] Running
	I1120 22:25:19.144756 1024614 system_pods.go:89] "storage-provisioner" [4edff332-d2b5-4acb-b661-f55dab3a9af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:19.144776 1024614 retry.go:31] will retry after 257.667791ms: missing components: kube-dns
	I1120 22:25:19.407071 1024614 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:19.407109 1024614 system_pods.go:89] "coredns-66bc5c9577-kdh8n" [de537859-5578-4115-9ba0-2986fae8dd40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:19.407117 1024614 system_pods.go:89] "etcd-default-k8s-diff-port-559701" [e687ca20-3324-4c9c-b307-fee4fc26e1cf] Running
	I1120 22:25:19.407161 1024614 system_pods.go:89] "kindnet-4g2sr" [d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f] Running
	I1120 22:25:19.407166 1024614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-559701" [17d5e32d-ee38-4946-86f2-794e63bbd380] Running
	I1120 22:25:19.407171 1024614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-559701" [1fa7b16a-d534-4bb5-bef9-5260b007621b] Running
	I1120 22:25:19.407180 1024614 system_pods.go:89] "kube-proxy-q6lq4" [b967db15-fd3c-4e36-939e-20736efe8c42] Running
	I1120 22:25:19.407184 1024614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-559701" [606fc9b9-8d24-4909-b4a3-d2de3cdb9d2c] Running
	I1120 22:25:19.407190 1024614 system_pods.go:89] "storage-provisioner" [4edff332-d2b5-4acb-b661-f55dab3a9af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:19.407225 1024614 retry.go:31] will retry after 269.41504ms: missing components: kube-dns
	I1120 22:25:19.681513 1024614 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:19.681552 1024614 system_pods.go:89] "coredns-66bc5c9577-kdh8n" [de537859-5578-4115-9ba0-2986fae8dd40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:19.681559 1024614 system_pods.go:89] "etcd-default-k8s-diff-port-559701" [e687ca20-3324-4c9c-b307-fee4fc26e1cf] Running
	I1120 22:25:19.681565 1024614 system_pods.go:89] "kindnet-4g2sr" [d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f] Running
	I1120 22:25:19.681570 1024614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-559701" [17d5e32d-ee38-4946-86f2-794e63bbd380] Running
	I1120 22:25:19.681576 1024614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-559701" [1fa7b16a-d534-4bb5-bef9-5260b007621b] Running
	I1120 22:25:19.681580 1024614 system_pods.go:89] "kube-proxy-q6lq4" [b967db15-fd3c-4e36-939e-20736efe8c42] Running
	I1120 22:25:19.681584 1024614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-559701" [606fc9b9-8d24-4909-b4a3-d2de3cdb9d2c] Running
	I1120 22:25:19.681590 1024614 system_pods.go:89] "storage-provisioner" [4edff332-d2b5-4acb-b661-f55dab3a9af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:19.681606 1024614 retry.go:31] will retry after 460.711693ms: missing components: kube-dns
	I1120 22:25:20.148258 1024614 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:20.148296 1024614 system_pods.go:89] "coredns-66bc5c9577-kdh8n" [de537859-5578-4115-9ba0-2986fae8dd40] Running
	I1120 22:25:20.148304 1024614 system_pods.go:89] "etcd-default-k8s-diff-port-559701" [e687ca20-3324-4c9c-b307-fee4fc26e1cf] Running
	I1120 22:25:20.148310 1024614 system_pods.go:89] "kindnet-4g2sr" [d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f] Running
	I1120 22:25:20.148315 1024614 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-559701" [17d5e32d-ee38-4946-86f2-794e63bbd380] Running
	I1120 22:25:20.148319 1024614 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-559701" [1fa7b16a-d534-4bb5-bef9-5260b007621b] Running
	I1120 22:25:20.148323 1024614 system_pods.go:89] "kube-proxy-q6lq4" [b967db15-fd3c-4e36-939e-20736efe8c42] Running
	I1120 22:25:20.148327 1024614 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-559701" [606fc9b9-8d24-4909-b4a3-d2de3cdb9d2c] Running
	I1120 22:25:20.148333 1024614 system_pods.go:89] "storage-provisioner" [4edff332-d2b5-4acb-b661-f55dab3a9af5] Running
	I1120 22:25:20.148341 1024614 system_pods.go:126] duration metric: took 1.0095242s to wait for k8s-apps to be running ...
	I1120 22:25:20.148355 1024614 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:25:20.148420 1024614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:25:20.162210 1024614 system_svc.go:56] duration metric: took 13.844519ms WaitForService to wait for kubelet
	I1120 22:25:20.162241 1024614 kubeadm.go:587] duration metric: took 43.572377659s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:25:20.162260 1024614 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:25:20.165881 1024614 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:25:20.165914 1024614 node_conditions.go:123] node cpu capacity is 2
	I1120 22:25:20.165948 1024614 node_conditions.go:105] duration metric: took 3.66289ms to run NodePressure ...
	I1120 22:25:20.165966 1024614 start.go:242] waiting for startup goroutines ...
	I1120 22:25:20.165977 1024614 start.go:247] waiting for cluster config update ...
	I1120 22:25:20.165989 1024614 start.go:256] writing updated cluster config ...
	I1120 22:25:20.166304 1024614 ssh_runner.go:195] Run: rm -f paused
	I1120 22:25:20.170550 1024614 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:25:20.174872 1024614 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kdh8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.179931 1024614 pod_ready.go:94] pod "coredns-66bc5c9577-kdh8n" is "Ready"
	I1120 22:25:20.179960 1024614 pod_ready.go:86] duration metric: took 5.055384ms for pod "coredns-66bc5c9577-kdh8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.182600 1024614 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.188101 1024614 pod_ready.go:94] pod "etcd-default-k8s-diff-port-559701" is "Ready"
	I1120 22:25:20.188131 1024614 pod_ready.go:86] duration metric: took 5.505843ms for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.191361 1024614 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.197104 1024614 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-559701" is "Ready"
	I1120 22:25:20.197139 1024614 pod_ready.go:86] duration metric: took 5.749432ms for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.199708 1024614 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.575978 1024614 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-559701" is "Ready"
	I1120 22:25:20.576009 1024614 pod_ready.go:86] duration metric: took 376.272951ms for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:20.775470 1024614 pod_ready.go:83] waiting for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:21.175325 1024614 pod_ready.go:94] pod "kube-proxy-q6lq4" is "Ready"
	I1120 22:25:21.175353 1024614 pod_ready.go:86] duration metric: took 399.853193ms for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:21.376538 1024614 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:21.774598 1024614 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-559701" is "Ready"
	I1120 22:25:21.774630 1024614 pod_ready.go:86] duration metric: took 398.063418ms for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:21.774643 1024614 pod_ready.go:40] duration metric: took 1.604056787s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:25:21.846052 1024614 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:25:21.849879 1024614 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-559701" cluster and "default" namespace by default
	W1120 22:25:19.288305 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:21.787936 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:23.788128 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:25.788350 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:28.288277 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:30.288425 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 20 22:25:19 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:19.171285376Z" level=info msg="Created container 7fa8cf658055ba293a50714748d9ecb6132dcfea88ee7381195d26a0f5f42601: kube-system/coredns-66bc5c9577-kdh8n/coredns" id=a39ae86c-22f0-468d-a663-cd42881bc350 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:19 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:19.174829192Z" level=info msg="Starting container: 7fa8cf658055ba293a50714748d9ecb6132dcfea88ee7381195d26a0f5f42601" id=b03fa7a1-2b9d-47ab-a88f-bc336d290c3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:25:19 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:19.177435141Z" level=info msg="Started container" PID=1769 containerID=7fa8cf658055ba293a50714748d9ecb6132dcfea88ee7381195d26a0f5f42601 description=kube-system/coredns-66bc5c9577-kdh8n/coredns id=b03fa7a1-2b9d-47ab-a88f-bc336d290c3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=40e5f5b5db7716d60d0d605685365dcdcf0075c183e694ce9d32c949d4e78fe4
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.387588962Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0f979079-81c1-4805-af64-2b4453f32432 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.38766509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.393021934Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11b78bcefff4fa9729819e1585fc5f17014dfc63f8a10ae78f655d37aacad685 UID:4f7c04d2-4cac-444d-82be-6529560dd56c NetNS:/var/run/netns/f752aa1f-124b-4317-b241-0ade27f63e7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e0}] Aliases:map[]}"
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.393072118Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.404916372Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11b78bcefff4fa9729819e1585fc5f17014dfc63f8a10ae78f655d37aacad685 UID:4f7c04d2-4cac-444d-82be-6529560dd56c NetNS:/var/run/netns/f752aa1f-124b-4317-b241-0ade27f63e7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e0}] Aliases:map[]}"
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.405141787Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.408040047Z" level=info msg="Ran pod sandbox 11b78bcefff4fa9729819e1585fc5f17014dfc63f8a10ae78f655d37aacad685 with infra container: default/busybox/POD" id=0f979079-81c1-4805-af64-2b4453f32432 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.409174003Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9adffb5-d53f-4973-9c7a-51d76c092f67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.409296482Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c9adffb5-d53f-4973-9c7a-51d76c092f67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.409351474Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c9adffb5-d53f-4973-9c7a-51d76c092f67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.412190468Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11989bba-3947-4db5-83b7-062eb7df5898 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:25:22 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:22.414935923Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.417159144Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=11989bba-3947-4db5-83b7-062eb7df5898 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.41887645Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6245cdd9-be4d-4ce5-9f15-0af7b4cbea4a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.427170234Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be714167-1580-4a0c-8250-68938757ad55 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.435456183Z" level=info msg="Creating container: default/busybox/busybox" id=9af01d1a-fb4a-45ef-a825-51357bc6de03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.435578925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.440901611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.441626716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.458566322Z" level=info msg="Created container 7ea271f8f49a442e9f63714eae1b39bc7b1070d110ff0a89b1548aec0f21688a: default/busybox/busybox" id=9af01d1a-fb4a-45ef-a825-51357bc6de03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.461445646Z" level=info msg="Starting container: 7ea271f8f49a442e9f63714eae1b39bc7b1070d110ff0a89b1548aec0f21688a" id=b128a82d-ab85-41af-aa69-09c41f118493 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:25:24 default-k8s-diff-port-559701 crio[834]: time="2025-11-20T22:25:24.465558298Z" level=info msg="Started container" PID=1826 containerID=7ea271f8f49a442e9f63714eae1b39bc7b1070d110ff0a89b1548aec0f21688a description=default/busybox/busybox id=b128a82d-ab85-41af-aa69-09c41f118493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11b78bcefff4fa9729819e1585fc5f17014dfc63f8a10ae78f655d37aacad685
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	7ea271f8f49a4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   11b78bcefff4f       busybox                                                default
	7fa8cf658055b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   40e5f5b5db771       coredns-66bc5c9577-kdh8n                               kube-system
	d555ebf51eb8d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   fc28ad725c868       storage-provisioner                                    kube-system
	fbe1643c4b7a2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   505b2ecf501d3       kube-proxy-q6lq4                                       kube-system
	658981bc4bcff       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   59b4efe724103       kindnet-4g2sr                                          kube-system
	43448090389d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   c5a55ae96bb9f       kube-apiserver-default-k8s-diff-port-559701            kube-system
	b31b6b4035fe9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0e6757209b741       kube-controller-manager-default-k8s-diff-port-559701   kube-system
	08b3a77366f46       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6c9a8c982cc89       etcd-default-k8s-diff-port-559701                      kube-system
	36dee8043ffdd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   5e0b6c17c9277       kube-scheduler-default-k8s-diff-port-559701            kube-system
	
	
	==> coredns [7fa8cf658055ba293a50714748d9ecb6132dcfea88ee7381195d26a0f5f42601] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55932 - 27918 "HINFO IN 1756540930455879405.6837609514474129978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020991843s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-559701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-559701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-559701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-559701
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:25:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:25:18 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:25:18 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:25:18 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:25:18 +0000   Thu, 20 Nov 2025 22:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-559701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                e075c442-07ea-4bfb-b4b4-14ea51a97fa9
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-kdh8n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-559701                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-4g2sr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-559701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-559701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-q6lq4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-559701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-559701 event: Registered Node default-k8s-diff-port-559701 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-559701 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 22:00] overlayfs: idmapped layers are currently not supported
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [08b3a77366f46b1bc714711e1ed830eacffb002eb9fd17c9e006e6130da63647] <==
	{"level":"warn","ts":"2025-11-20T22:24:24.893652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:24.967218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.033960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.097385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.139468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.155808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.180081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.210170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.239288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.275208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.299766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.345381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.361830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.398194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.435081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.450886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.509687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.509900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.530593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.558651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.594264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.627083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.677537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.682303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:25.865823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:25:32 up  5:07,  0 user,  load average: 3.47, 3.27, 2.62
	Linux default-k8s-diff-port-559701 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [658981bc4bcfff5550c12fb46c0262cf9d6e4c583dca34dc82e1a49b39cb02ce] <==
	I1120 22:24:37.922576       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:24:38.007409       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:24:38.007584       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:24:38.007598       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:24:38.007614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:24:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:24:38.221402       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:24:38.221473       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:24:38.221522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:24:38.222289       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:25:08.222160       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:25:08.222160       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:25:08.222272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:25:08.222343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:25:09.522524       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:25:09.522569       1 metrics.go:72] Registering metrics
	I1120 22:25:09.522646       1 controller.go:711] "Syncing nftables rules"
	I1120 22:25:18.224439       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:25:18.224497       1 main.go:301] handling current node
	I1120 22:25:28.223071       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:25:28.223106       1 main.go:301] handling current node
	
	
	==> kube-apiserver [43448090389d99dd6c2e847a4b5589de8ab4051896d8217e516a5fcc97af410b] <==
	I1120 22:24:27.554792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:24:27.559515       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:24:27.585678       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:24:27.619436       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:24:27.625514       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 22:24:27.653897       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:24:27.654118       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:24:28.324818       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 22:24:28.389214       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 22:24:28.389466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:24:30.093725       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:24:30.210244       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:24:30.327952       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 22:24:30.342627       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 22:24:30.343928       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:24:30.365057       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:24:31.107360       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:24:31.249998       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:24:31.328228       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 22:24:31.358835       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:24:36.316735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:24:36.339891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:24:36.908171       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:24:37.206822       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 22:25:31.225200       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:59456: use of closed network connection
	
	
	==> kube-controller-manager [b31b6b4035fe9ccd1104341e798f8e73614b7a630118f0a9375caa2ffd6cc1fd] <==
	I1120 22:24:36.262271       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:24:36.267284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:24:36.267305       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:24:36.271004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:24:36.271640       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-559701" podCIDRs=["10.244.0.0/24"]
	I1120 22:24:36.271769       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:24:36.271806       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:24:36.272981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:24:36.273502       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:24:36.276169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:24:36.287607       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:24:36.292489       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:24:36.292587       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 22:24:36.296227       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:24:36.296787       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:24:36.299514       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:24:36.299544       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 22:24:36.300867       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 22:24:36.301486       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-559701"
	I1120 22:24:36.301524       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 22:24:36.299971       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 22:24:36.326067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:24:36.326176       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:24:36.326224       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:25:21.308707       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fbe1643c4b7a29cdf87fb3a3fe956858d4ebc2dab6eb13763d7b0fd181cc65b4] <==
	I1120 22:24:38.712212       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:24:38.820891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:24:38.921856       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:24:38.921893       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:24:38.921964       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:24:38.946971       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:24:38.947126       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:24:38.951432       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:24:38.951873       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:24:38.952079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:24:38.953661       1 config.go:200] "Starting service config controller"
	I1120 22:24:38.953741       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:24:38.953802       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:24:38.953832       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:24:38.953872       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:24:38.953899       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:24:38.954583       1 config.go:309] "Starting node config controller"
	I1120 22:24:38.957056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:24:38.957147       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:24:39.054410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:24:39.054469       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:24:39.054510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [36dee8043ffdd9718af5219f4b2ba57016160270920a5c018383a11d2b8de499] <==
	I1120 22:24:26.378886       1 serving.go:386] Generated self-signed cert in-memory
	W1120 22:24:30.005840       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:24:30.005887       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:24:30.005898       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:24:30.005916       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:24:30.077486       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:24:30.077530       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:24:30.079854       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:24:30.079897       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:24:30.089170       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:24:30.089366       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 22:24:30.112172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:24:31.281456       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:24:36 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:36.266015    1335 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 22:24:36 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:36.267391    1335 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: E1120 22:24:37.360011    1335 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-559701\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-559701' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398687    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b967db15-fd3c-4e36-939e-20736efe8c42-xtables-lock\") pod \"kube-proxy-q6lq4\" (UID: \"b967db15-fd3c-4e36-939e-20736efe8c42\") " pod="kube-system/kube-proxy-q6lq4"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398793    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f-xtables-lock\") pod \"kindnet-4g2sr\" (UID: \"d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f\") " pod="kube-system/kindnet-4g2sr"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398818    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f-cni-cfg\") pod \"kindnet-4g2sr\" (UID: \"d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f\") " pod="kube-system/kindnet-4g2sr"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398876    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f-lib-modules\") pod \"kindnet-4g2sr\" (UID: \"d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f\") " pod="kube-system/kindnet-4g2sr"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398895    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b967db15-fd3c-4e36-939e-20736efe8c42-kube-proxy\") pod \"kube-proxy-q6lq4\" (UID: \"b967db15-fd3c-4e36-939e-20736efe8c42\") " pod="kube-system/kube-proxy-q6lq4"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.398915    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz98l\" (UniqueName: \"kubernetes.io/projected/d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f-kube-api-access-gz98l\") pod \"kindnet-4g2sr\" (UID: \"d45fbf01-84b4-4af9-ac70-7d2c36c0aa4f\") " pod="kube-system/kindnet-4g2sr"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.399005    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b967db15-fd3c-4e36-939e-20736efe8c42-lib-modules\") pod \"kube-proxy-q6lq4\" (UID: \"b967db15-fd3c-4e36-939e-20736efe8c42\") " pod="kube-system/kube-proxy-q6lq4"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.399039    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5wh6\" (UniqueName: \"kubernetes.io/projected/b967db15-fd3c-4e36-939e-20736efe8c42-kube-api-access-j5wh6\") pod \"kube-proxy-q6lq4\" (UID: \"b967db15-fd3c-4e36-939e-20736efe8c42\") " pod="kube-system/kube-proxy-q6lq4"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:37.556965    1335 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:24:37 default-k8s-diff-port-559701 kubelet[1335]: W1120 22:24:37.688715    1335 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-59b4efe72410348e40cc432f97b114f5eefaf5be9385c78eaabee824d5cc5e76 WatchSource:0}: Error finding container 59b4efe72410348e40cc432f97b114f5eefaf5be9385c78eaabee824d5cc5e76: Status 404 returned error can't find the container with id 59b4efe72410348e40cc432f97b114f5eefaf5be9385c78eaabee824d5cc5e76
	Nov 20 22:24:38 default-k8s-diff-port-559701 kubelet[1335]: W1120 22:24:38.565299    1335 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-505b2ecf501d31cd595513b69df768d69a5b9a55948a21d51b8b0d7fc47d84bf WatchSource:0}: Error finding container 505b2ecf501d31cd595513b69df768d69a5b9a55948a21d51b8b0d7fc47d84bf: Status 404 returned error can't find the container with id 505b2ecf501d31cd595513b69df768d69a5b9a55948a21d51b8b0d7fc47d84bf
	Nov 20 22:24:38 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:38.797859    1335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4g2sr" podStartSLOduration=1.797838838 podStartE2EDuration="1.797838838s" podCreationTimestamp="2025-11-20 22:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:24:38.774637186 +0000 UTC m=+7.724661211" watchObservedRunningTime="2025-11-20 22:24:38.797838838 +0000 UTC m=+7.747862863"
	Nov 20 22:24:38 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:24:38.829290    1335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q6lq4" podStartSLOduration=1.82927171 podStartE2EDuration="1.82927171s" podCreationTimestamp="2025-11-20 22:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:24:38.800425801 +0000 UTC m=+7.750449826" watchObservedRunningTime="2025-11-20 22:24:38.82927171 +0000 UTC m=+7.779295735"
	Nov 20 22:25:18 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:18.717537    1335 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 22:25:18 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:18.809850    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdcp5\" (UniqueName: \"kubernetes.io/projected/4edff332-d2b5-4acb-b661-f55dab3a9af5-kube-api-access-bdcp5\") pod \"storage-provisioner\" (UID: \"4edff332-d2b5-4acb-b661-f55dab3a9af5\") " pod="kube-system/storage-provisioner"
	Nov 20 22:25:18 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:18.810064    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de537859-5578-4115-9ba0-2986fae8dd40-config-volume\") pod \"coredns-66bc5c9577-kdh8n\" (UID: \"de537859-5578-4115-9ba0-2986fae8dd40\") " pod="kube-system/coredns-66bc5c9577-kdh8n"
	Nov 20 22:25:18 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:18.810106    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4edff332-d2b5-4acb-b661-f55dab3a9af5-tmp\") pod \"storage-provisioner\" (UID: \"4edff332-d2b5-4acb-b661-f55dab3a9af5\") " pod="kube-system/storage-provisioner"
	Nov 20 22:25:18 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:18.810129    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5lz\" (UniqueName: \"kubernetes.io/projected/de537859-5578-4115-9ba0-2986fae8dd40-kube-api-access-nf5lz\") pod \"coredns-66bc5c9577-kdh8n\" (UID: \"de537859-5578-4115-9ba0-2986fae8dd40\") " pod="kube-system/coredns-66bc5c9577-kdh8n"
	Nov 20 22:25:19 default-k8s-diff-port-559701 kubelet[1335]: W1120 22:25:19.128905    1335 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-40e5f5b5db7716d60d0d605685365dcdcf0075c183e694ce9d32c949d4e78fe4 WatchSource:0}: Error finding container 40e5f5b5db7716d60d0d605685365dcdcf0075c183e694ce9d32c949d4e78fe4: Status 404 returned error can't find the container with id 40e5f5b5db7716d60d0d605685365dcdcf0075c183e694ce9d32c949d4e78fe4
	Nov 20 22:25:19 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:19.886470    1335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kdh8n" podStartSLOduration=42.88644907 podStartE2EDuration="42.88644907s" podCreationTimestamp="2025-11-20 22:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:19.86093675 +0000 UTC m=+48.810960783" watchObservedRunningTime="2025-11-20 22:25:19.88644907 +0000 UTC m=+48.836473095"
	Nov 20 22:25:19 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:19.908821    1335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.908801226 podStartE2EDuration="41.908801226s" podCreationTimestamp="2025-11-20 22:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:19.889557549 +0000 UTC m=+48.839581582" watchObservedRunningTime="2025-11-20 22:25:19.908801226 +0000 UTC m=+48.858825243"
	Nov 20 22:25:22 default-k8s-diff-port-559701 kubelet[1335]: I1120 22:25:22.138403    1335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktbc\" (UniqueName: \"kubernetes.io/projected/4f7c04d2-4cac-444d-82be-6529560dd56c-kube-api-access-cktbc\") pod \"busybox\" (UID: \"4f7c04d2-4cac-444d-82be-6529560dd56c\") " pod="default/busybox"
	
	
	==> storage-provisioner [d555ebf51eb8dfeb33425b7cfe8b6dc73f75351b595c514d7b5c949a1646d2f5] <==
	I1120 22:25:19.168419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:25:19.185123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:25:19.185177       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:25:19.190547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:19.196680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:25:19.197017       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:25:19.197180       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_df12504e-4570-414a-aa29-3e46414be4ef!
	I1120 22:25:19.199039       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a307c5-854a-4ffe-8ac7-a9f82ffd8d45", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-559701_df12504e-4570-414a-aa29-3e46414be4ef became leader
	W1120 22:25:19.204132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:19.208225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:25:19.298146       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_df12504e-4570-414a-aa29-3e46414be4ef!
	W1120 22:25:21.211091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:21.215846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:23.218649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:23.223672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:25.227215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:25.231975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:27.236556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:27.241245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:29.244581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:29.248913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:31.253091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:31.258654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:33.264701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:33.272794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.65s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (346.974037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
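The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe, which runs `sudo runc list -f json` on the node and fails when runc's state directory (/run/runc) is missing. The following is only an illustrative sketch of that probe, not minikube source; the JSON field names in the struct are an approximation of runc's output and should be treated as assumptions.

// Sketch: run the same `runc list -f json` probe and report paused containers.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`     // assumed field name in runc's JSON listing
	Status string `json:"status"` // e.g. "running", "paused"
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// This is the failure mode in the log: runc exits non-zero when
		// /run/runc does not exist on the node.
		fmt.Printf("runc list failed: %v\n%s\n", err, out)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}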
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-270206 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-270206 describe deploy/metrics-server -n kube-system: exit status 1 (119.733954ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-270206 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
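The assertion at line 219 checks that the metrics-server deployment uses the overridden image "fake.domain/registry.k8s.io/echoserver:1.4" passed via --images/--registries. A hedged sketch of that check with client-go follows; the kubeconfig path is illustrative and the client-go dependency is an assumption of the sketch, not something the report provides.

// Sketch: fetch the metrics-server deployment and confirm a container image
// contains the expected overridden registry/image string.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const expected = "fake.domain/registry.k8s.io/echoserver:1.4"
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		// In the failure above the deployment was never created, so this path is hit.
		fmt.Println("deployment not found:", err)
		return
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, expected) {
			fmt.Println("addon image override applied:", c.Image)
			return
		}
	}
	fmt.Println("addon image override missing")
}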
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
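The helper snapshots the host's proxy environment before collecting further diagnostics. A trivial, hedged sketch of that snapshot (illustrative only) is:

// Sketch: report HTTP_PROXY/HTTPS_PROXY/NO_PROXY, rendering unset variables as <empty>.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>" // matches the rendering in the post-mortem line above
		}
		fmt.Printf("%s=%q\n", k, v)
	}
}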
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-270206
helpers_test.go:243: (dbg) docker inspect embed-certs-270206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	        "Created": "2025-11-20T22:24:33.33301512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1028403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:24:33.404596529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hosts",
	        "LogPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd-json.log",
	        "Name": "/embed-certs-270206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-270206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-270206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	                "LowerDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-270206",
	                "Source": "/var/lib/docker/volumes/embed-certs-270206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-270206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-270206",
	                "name.minikube.sigs.k8s.io": "embed-certs-270206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e58da43f53ecf4200d07db13e93dddefa66bb0e2b11fda793bd35801e03383d7",
	            "SandboxKey": "/var/run/docker/netns/e58da43f53ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34172"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34173"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34174"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34175"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-270206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:37:d5:37:67:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffd59b794c532505e054cacac90fc1087646ff0df0b0ac27f388edeea26b442",
	                    "EndpointID": "072af17bc2f01fad6aec4460c3e1a4b81f732a42c52d915f581a163b0ba23e2f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-270206",
	                        "155df8ef967b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
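The docker inspect output above shows the container's 22/tcp bound to a host port on 127.0.0.1 (34172 here), which is how the node is reached over SSH later in this log. A small sketch of recovering that port with the same Go template minikube's provisioner uses (visible further down in the "Last Start" log); the container name is taken from the output above.

// Sketch: query Docker for the host port bound to 22/tcp of the kic container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "embed-certs-270206"
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	port := strings.TrimSpace(string(out))
	// With the bindings shown above this prints 34172; the SSH endpoint is 127.0.0.1:<port>.
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
}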
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25: (1.501923061s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-410652                                                                                                                                                                                                                  │ kubernetes-upgrade-410652    │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ delete  │ -p force-systemd-env-833370                                                                                                                                                                                                                   │ force-systemd-env-833370     │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:20 UTC │
	│ start   │ -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:20 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ cert-options-961311 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:25:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:25:46.143818 1031720 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:25:46.143946 1031720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:25:46.143956 1031720 out.go:374] Setting ErrFile to fd 2...
	I1120 22:25:46.143961 1031720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:25:46.144226 1031720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:25:46.144593 1031720 out.go:368] Setting JSON to false
	I1120 22:25:46.145552 1031720 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18472,"bootTime":1763659075,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:25:46.145625 1031720 start.go:143] virtualization:  
	I1120 22:25:46.148789 1031720 out.go:179] * [default-k8s-diff-port-559701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:25:46.152794 1031720 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:25:46.152865 1031720 notify.go:221] Checking for updates...
	I1120 22:25:46.160030 1031720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:25:46.162899 1031720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:25:46.165699 1031720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:25:46.168535 1031720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:25:46.171554 1031720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:25:46.175184 1031720 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:25:46.175896 1031720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:25:46.210650 1031720 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:25:46.210766 1031720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:25:46.270234 1031720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:25:46.259585189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:25:46.270339 1031720 docker.go:319] overlay module found
	I1120 22:25:46.273614 1031720 out.go:179] * Using the docker driver based on existing profile
	I1120 22:25:46.276522 1031720 start.go:309] selected driver: docker
	I1120 22:25:46.276548 1031720 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-559701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-559701 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:25:46.276644 1031720 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:25:46.277501 1031720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:25:46.332964 1031720 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:25:46.323515488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:25:46.333379 1031720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:25:46.333417 1031720 cni.go:84] Creating CNI manager for ""
	I1120 22:25:46.333472 1031720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:25:46.333522 1031720 start.go:353] cluster config:
	{Name:default-k8s-diff-port-559701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-559701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:25:46.336715 1031720 out.go:179] * Starting "default-k8s-diff-port-559701" primary control-plane node in "default-k8s-diff-port-559701" cluster
	I1120 22:25:46.339561 1031720 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:25:46.342554 1031720 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:25:46.345462 1031720 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:25:46.345520 1031720 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:25:46.345539 1031720 cache.go:65] Caching tarball of preloaded images
	I1120 22:25:46.345554 1031720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:25:46.345633 1031720 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:25:46.345644 1031720 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:25:46.345758 1031720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/config.json ...
	I1120 22:25:46.373683 1031720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:25:46.373707 1031720 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:25:46.373726 1031720 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:25:46.373750 1031720 start.go:360] acquireMachinesLock for default-k8s-diff-port-559701: {Name:mk900dead38e18fa51f96a06db851d1766aa25c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:25:46.373820 1031720 start.go:364] duration metric: took 46.359µs to acquireMachinesLock for "default-k8s-diff-port-559701"
	I1120 22:25:46.373843 1031720 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:25:46.373851 1031720 fix.go:54] fixHost starting: 
	I1120 22:25:46.374121 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:46.395150 1031720 fix.go:112] recreateIfNeeded on default-k8s-diff-port-559701: state=Stopped err=<nil>
	W1120 22:25:46.395181 1031720 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:25:43.788473 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	W1120 22:25:45.788571 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	I1120 22:25:46.398379 1031720 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-559701" ...
	I1120 22:25:46.398472 1031720 cli_runner.go:164] Run: docker start default-k8s-diff-port-559701
	I1120 22:25:46.648472 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:46.672975 1031720 kic.go:430] container "default-k8s-diff-port-559701" state is running.
	I1120 22:25:46.673619 1031720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-559701
	I1120 22:25:46.696744 1031720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/config.json ...
	I1120 22:25:46.697328 1031720 machine.go:94] provisionDockerMachine start ...
	I1120 22:25:46.697416 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:46.721952 1031720 main.go:143] libmachine: Using SSH client type: native
	I1120 22:25:46.722319 1031720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1120 22:25:46.722329 1031720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:25:46.723026 1031720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:25:49.874634 1031720 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-559701
	
	I1120 22:25:49.874662 1031720 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-559701"
	I1120 22:25:49.874732 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:49.894065 1031720 main.go:143] libmachine: Using SSH client type: native
	I1120 22:25:49.894480 1031720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1120 22:25:49.894502 1031720 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-559701 && echo "default-k8s-diff-port-559701" | sudo tee /etc/hostname
	I1120 22:25:50.054135 1031720 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-559701
	
	I1120 22:25:50.054227 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:50.073615 1031720 main.go:143] libmachine: Using SSH client type: native
	I1120 22:25:50.073976 1031720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1120 22:25:50.074001 1031720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-559701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-559701/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-559701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:25:50.223600 1031720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:25:50.223624 1031720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:25:50.223648 1031720 ubuntu.go:190] setting up certificates
	I1120 22:25:50.223658 1031720 provision.go:84] configureAuth start
	I1120 22:25:50.223724 1031720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-559701
	I1120 22:25:50.241804 1031720 provision.go:143] copyHostCerts
	I1120 22:25:50.241878 1031720 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:25:50.241896 1031720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:25:50.241975 1031720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:25:50.242077 1031720 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:25:50.242082 1031720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:25:50.242108 1031720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:25:50.242165 1031720 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:25:50.242170 1031720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:25:50.242192 1031720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:25:50.242245 1031720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-559701 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-559701 localhost minikube]
	I1120 22:25:50.993894 1031720 provision.go:177] copyRemoteCerts
	I1120 22:25:50.993973 1031720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:25:50.994016 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.018902 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:51.128061 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	W1120 22:25:48.287653 1027933 node_ready.go:57] node "embed-certs-270206" has "Ready":"False" status (will retry)
	I1120 22:25:50.291723 1027933 node_ready.go:49] node "embed-certs-270206" is "Ready"
	I1120 22:25:50.291758 1027933 node_ready.go:38] duration metric: took 40.007032076s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:25:50.291772 1027933 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:25:50.291837 1027933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:25:50.307463 1027933 api_server.go:72] duration metric: took 41.270753687s to wait for apiserver process to appear ...
	I1120 22:25:50.307487 1027933 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:25:50.307506 1027933 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:25:50.325679 1027933 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 22:25:50.332140 1027933 api_server.go:141] control plane version: v1.34.1
	I1120 22:25:50.332167 1027933 api_server.go:131] duration metric: took 24.672851ms to wait for apiserver health ...
	I1120 22:25:50.332176 1027933 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:25:50.349463 1027933 system_pods.go:59] 8 kube-system pods found
	I1120 22:25:50.349493 1027933 system_pods.go:61] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Pending
	I1120 22:25:50.349499 1027933 system_pods.go:61] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running
	I1120 22:25:50.349503 1027933 system_pods.go:61] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:25:50.349508 1027933 system_pods.go:61] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running
	I1120 22:25:50.349512 1027933 system_pods.go:61] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running
	I1120 22:25:50.349516 1027933 system_pods.go:61] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:25:50.349521 1027933 system_pods.go:61] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running
	I1120 22:25:50.349525 1027933 system_pods.go:61] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Pending
	I1120 22:25:50.349531 1027933 system_pods.go:74] duration metric: took 17.348711ms to wait for pod list to return data ...
	I1120 22:25:50.349539 1027933 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:25:50.356669 1027933 default_sa.go:45] found service account: "default"
	I1120 22:25:50.356746 1027933 default_sa.go:55] duration metric: took 7.201134ms for default service account to be created ...
	I1120 22:25:50.356785 1027933 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:25:50.365636 1027933 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:50.365672 1027933 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:50.365679 1027933 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running
	I1120 22:25:50.365685 1027933 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:25:50.365689 1027933 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running
	I1120 22:25:50.365693 1027933 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running
	I1120 22:25:50.365698 1027933 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:25:50.365701 1027933 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running
	I1120 22:25:50.365707 1027933 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Pending
	I1120 22:25:50.365726 1027933 retry.go:31] will retry after 221.152442ms: missing components: kube-dns
	I1120 22:25:50.614095 1027933 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:50.614144 1027933 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:50.614155 1027933 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running
	I1120 22:25:50.614162 1027933 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:25:50.614166 1027933 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running
	I1120 22:25:50.614172 1027933 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running
	I1120 22:25:50.614184 1027933 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:25:50.614192 1027933 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running
	I1120 22:25:50.614198 1027933 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:50.614219 1027933 retry.go:31] will retry after 303.972196ms: missing components: kube-dns
	I1120 22:25:50.926022 1027933 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:50.926063 1027933 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:50.926070 1027933 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running
	I1120 22:25:50.926076 1027933 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:25:50.926080 1027933 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running
	I1120 22:25:50.926088 1027933 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running
	I1120 22:25:50.926092 1027933 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:25:50.926096 1027933 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running
	I1120 22:25:50.926102 1027933 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:25:50.926115 1027933 retry.go:31] will retry after 316.72413ms: missing components: kube-dns
	I1120 22:25:51.249215 1027933 system_pods.go:86] 8 kube-system pods found
	I1120 22:25:51.249253 1027933 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:25:51.249261 1027933 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running
	I1120 22:25:51.249267 1027933 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:25:51.249271 1027933 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running
	I1120 22:25:51.249275 1027933 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running
	I1120 22:25:51.249279 1027933 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:25:51.249283 1027933 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running
	I1120 22:25:51.249287 1027933 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Running
	I1120 22:25:51.249294 1027933 system_pods.go:126] duration metric: took 892.486564ms to wait for k8s-apps to be running ...
	I1120 22:25:51.249306 1027933 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:25:51.249363 1027933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:25:51.265225 1027933 system_svc.go:56] duration metric: took 15.907659ms WaitForService to wait for kubelet
	I1120 22:25:51.265253 1027933 kubeadm.go:587] duration metric: took 42.228549341s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:25:51.265277 1027933 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:25:51.268798 1027933 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:25:51.268836 1027933 node_conditions.go:123] node cpu capacity is 2
	I1120 22:25:51.268849 1027933 node_conditions.go:105] duration metric: took 3.565371ms to run NodePressure ...
	I1120 22:25:51.268862 1027933 start.go:242] waiting for startup goroutines ...
	I1120 22:25:51.268870 1027933 start.go:247] waiting for cluster config update ...
	I1120 22:25:51.268881 1027933 start.go:256] writing updated cluster config ...
	I1120 22:25:51.269182 1027933 ssh_runner.go:195] Run: rm -f paused
	I1120 22:25:51.276153 1027933 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:25:51.280285 1027933 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
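(The "extra waiting" step above can be reproduced by hand with kubectl against the same label selectors; this is an illustrative sketch, not the code minikube actually runs.)

# Wait for the same kube-system pods the log waits on to report Ready
# (hand-run equivalent; label selectors taken from the log line above).
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
done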
	I1120 22:25:51.150113 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:25:51.171790 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:25:51.191164 1031720 provision.go:87] duration metric: took 967.491819ms to configureAuth
	I1120 22:25:51.191195 1031720 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:25:51.191411 1031720 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:25:51.191520 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.210205 1031720 main.go:143] libmachine: Using SSH client type: native
	I1120 22:25:51.210512 1031720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1120 22:25:51.210531 1031720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:25:51.580122 1031720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:25:51.580148 1031720 machine.go:97] duration metric: took 4.882806284s to provisionDockerMachine
	I1120 22:25:51.580168 1031720 start.go:293] postStartSetup for "default-k8s-diff-port-559701" (driver="docker")
	I1120 22:25:51.580198 1031720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:25:51.580297 1031720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:25:51.580370 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.601397 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:51.711177 1031720 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:25:51.714844 1031720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:25:51.714871 1031720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:25:51.714881 1031720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:25:51.714940 1031720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:25:51.715050 1031720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:25:51.715159 1031720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:25:51.723391 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:25:51.742274 1031720 start.go:296] duration metric: took 162.071459ms for postStartSetup
	I1120 22:25:51.742365 1031720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:25:51.742406 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.760223 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:51.864226 1031720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:25:51.869811 1031720 fix.go:56] duration metric: took 5.495951926s for fixHost
	I1120 22:25:51.869833 1031720 start.go:83] releasing machines lock for "default-k8s-diff-port-559701", held for 5.496003291s
	I1120 22:25:51.869901 1031720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-559701
	I1120 22:25:51.887183 1031720 ssh_runner.go:195] Run: cat /version.json
	I1120 22:25:51.887237 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.887559 1031720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:25:51.887610 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:51.906889 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:51.907117 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:52.008398 1031720 ssh_runner.go:195] Run: systemctl --version
	I1120 22:25:52.108712 1031720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:25:52.154230 1031720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:25:52.158865 1031720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:25:52.158953 1031720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:25:52.168379 1031720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
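(The find invocation logged above has had its shell quoting stripped by the logger; a runnable, equivalently behaving form, assuming GNU find as shipped in the kicbase image, looks like this.)

# Rename any bridge/podman CNI configs so only the expected CNI (kindnet) stays active.
sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;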
	I1120 22:25:52.168403 1031720 start.go:496] detecting cgroup driver to use...
	I1120 22:25:52.168448 1031720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:25:52.168504 1031720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:25:52.184423 1031720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:25:52.197637 1031720 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:25:52.197783 1031720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:25:52.214112 1031720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:25:52.227792 1031720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:25:52.368788 1031720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:25:52.496349 1031720 docker.go:234] disabling docker service ...
	I1120 22:25:52.496437 1031720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:25:52.513273 1031720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:25:52.527195 1031720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:25:52.647962 1031720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:25:52.786306 1031720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:25:52.799734 1031720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:25:52.816739 1031720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:25:52.816872 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.826235 1031720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:25:52.826339 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.843733 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.853019 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.862612 1031720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:25:52.880873 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.892620 1031720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.910995 1031720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:25:52.922256 1031720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:25:52.930568 1031720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:25:52.939093 1031720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:25:53.075599 1031720 ssh_runner.go:195] Run: sudo systemctl restart crio
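(Taken together, the CRI-O preparation logged above reduces to a handful of shell steps; the sketch below restates them with the values from this run and is not minikube's exact code path.)

# Point crictl at the CRI-O socket.
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

# Align pause image and cgroup driver in the CRI-O drop-in minikube manages.
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

# Reload units and restart the runtime so the changes take effect.
sudo systemctl daemon-reload
sudo systemctl restart crio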
	I1120 22:25:53.255669 1031720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:25:53.255754 1031720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:25:53.260380 1031720 start.go:564] Will wait 60s for crictl version
	I1120 22:25:53.260461 1031720 ssh_runner.go:195] Run: which crictl
	I1120 22:25:53.266023 1031720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:25:53.293685 1031720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:25:53.293775 1031720 ssh_runner.go:195] Run: crio --version
	I1120 22:25:53.323369 1031720 ssh_runner.go:195] Run: crio --version
	I1120 22:25:53.363362 1031720 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:25:52.287936 1027933 pod_ready.go:94] pod "coredns-66bc5c9577-c5cg5" is "Ready"
	I1120 22:25:52.287962 1027933 pod_ready.go:86] duration metric: took 1.00764564s for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.299879 1027933 pod_ready.go:83] waiting for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.309142 1027933 pod_ready.go:94] pod "etcd-embed-certs-270206" is "Ready"
	I1120 22:25:52.309168 1027933 pod_ready.go:86] duration metric: took 9.26384ms for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.312153 1027933 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.317892 1027933 pod_ready.go:94] pod "kube-apiserver-embed-certs-270206" is "Ready"
	I1120 22:25:52.317915 1027933 pod_ready.go:86] duration metric: took 5.740693ms for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.320750 1027933 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.485488 1027933 pod_ready.go:94] pod "kube-controller-manager-embed-certs-270206" is "Ready"
	I1120 22:25:52.485565 1027933 pod_ready.go:86] duration metric: took 164.745832ms for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:52.685172 1027933 pod_ready.go:83] waiting for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:53.086292 1027933 pod_ready.go:94] pod "kube-proxy-9d84b" is "Ready"
	I1120 22:25:53.086390 1027933 pod_ready.go:86] duration metric: took 401.186042ms for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:53.285516 1027933 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:53.684818 1027933 pod_ready.go:94] pod "kube-scheduler-embed-certs-270206" is "Ready"
	I1120 22:25:53.684842 1027933 pod_ready.go:86] duration metric: took 399.252085ms for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:25:53.684855 1027933 pod_ready.go:40] duration metric: took 2.408665202s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:25:53.779099 1027933 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:25:53.782671 1027933 out.go:179] * Done! kubectl is now configured to use "embed-certs-270206" cluster and "default" namespace by default
	I1120 22:25:53.366386 1031720 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-559701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:25:53.383390 1031720 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:25:53.387314 1031720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:25:53.397320 1031720 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-559701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-559701 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:25:53.397460 1031720 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:25:53.397512 1031720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:25:53.433448 1031720 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:25:53.433473 1031720 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:25:53.433536 1031720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:25:53.463487 1031720 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:25:53.463513 1031720 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:25:53.463521 1031720 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1120 22:25:53.463615 1031720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-559701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-559701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:25:53.463713 1031720 ssh_runner.go:195] Run: crio config
	I1120 22:25:53.536157 1031720 cni.go:84] Creating CNI manager for ""
	I1120 22:25:53.536182 1031720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:25:53.536203 1031720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:25:53.536236 1031720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-559701 NodeName:default-k8s-diff-port-559701 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:25:53.536382 1031720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-559701"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:25:53.536473 1031720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:25:53.544589 1031720 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:25:53.544732 1031720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:25:53.552529 1031720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1120 22:25:53.565807 1031720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:25:53.579647 1031720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
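(Once the generated config above has been written to /var/tmp/minikube/kubeadm.yaml.new, minikube only reconfigures the control plane when it differs from the file already on the node; the diff it runs appears further down this log, and the check is essentially the following sketch.)

# Empty diff => the running cluster does not need to be reconfigured.
if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  echo "kubeadm config unchanged; skipping control-plane reconfiguration"
else
  echo "kubeadm config changed; control plane would be reconfigured"
fi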
	I1120 22:25:53.593204 1031720 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:25:53.597141 1031720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:25:53.607172 1031720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:25:53.733548 1031720 ssh_runner.go:195] Run: sudo systemctl start kubelet
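(After the drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is written and the kubelet started, the effective unit can be inspected by hand; these verification commands are an illustrative assumption, not taken from the log.)

# Confirm systemd merged the minikube drop-in and the kubelet is running.
systemctl cat kubelet | grep -n 'ExecStart='
systemctl is-active kubelet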
	I1120 22:25:53.755169 1031720 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701 for IP: 192.168.85.2
	I1120 22:25:53.755190 1031720 certs.go:195] generating shared ca certs ...
	I1120 22:25:53.755205 1031720 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:53.755359 1031720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:25:53.755404 1031720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:25:53.755410 1031720 certs.go:257] generating profile certs ...
	I1120 22:25:53.755502 1031720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.key
	I1120 22:25:53.755574 1031720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/apiserver.key.40d9f2a6
	I1120 22:25:53.755655 1031720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/proxy-client.key
	I1120 22:25:53.755828 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:25:53.755965 1031720 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:25:53.755984 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:25:53.756042 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:25:53.756090 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:25:53.756118 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:25:53.756186 1031720 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:25:53.756830 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:25:53.871607 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:25:53.917700 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:25:53.979265 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:25:54.019454 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 22:25:54.064750 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:25:54.122452 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:25:54.148949 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 22:25:54.173170 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:25:54.200453 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:25:54.220879 1031720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:25:54.241328 1031720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:25:54.255859 1031720 ssh_runner.go:195] Run: openssl version
	I1120 22:25:54.264969 1031720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:25:54.273402 1031720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:25:54.282352 1031720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:25:54.286520 1031720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:25:54.286622 1031720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:25:54.334701 1031720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:25:54.343151 1031720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:25:54.351084 1031720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:25:54.359568 1031720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:25:54.363825 1031720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:25:54.363895 1031720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:25:54.405072 1031720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:25:54.415368 1031720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:25:54.428186 1031720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:25:54.436073 1031720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:25:54.440333 1031720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:25:54.440406 1031720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:25:54.485118 1031720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
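(The openssl/ln sequence above follows the standard hashed-symlink layout under /etc/ssl/certs; for a single certificate the pattern is as below, a sketch using paths from this log.)

# Install a CA certificate and create the hash symlink that OpenSSL lookups expect.
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, matching the test -L above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"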
	I1120 22:25:54.494108 1031720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:25:54.499158 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:25:54.542605 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:25:54.595136 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:25:54.655492 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:25:54.725694 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:25:54.797899 1031720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 22:25:54.894091 1031720 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-559701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-559701 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:25:54.894181 1031720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:25:54.894264 1031720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:25:54.972472 1031720 cri.go:89] found id: "5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c"
	I1120 22:25:54.972496 1031720 cri.go:89] found id: "f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367"
	I1120 22:25:54.972502 1031720 cri.go:89] found id: "24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41"
	I1120 22:25:54.972515 1031720 cri.go:89] found id: "1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb"
	I1120 22:25:54.972518 1031720 cri.go:89] found id: ""
	I1120 22:25:54.972575 1031720 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:25:54.994192 1031720 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:25:54Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:25:54.994277 1031720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:25:55.014664 1031720 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:25:55.014690 1031720 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:25:55.014755 1031720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:25:55.030651 1031720 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:25:55.031547 1031720 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-559701" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:25:55.032103 1031720 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-559701" cluster setting kubeconfig missing "default-k8s-diff-port-559701" context setting]
	I1120 22:25:55.032953 1031720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:55.034690 1031720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:25:55.051160 1031720 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:25:55.051196 1031720 kubeadm.go:602] duration metric: took 36.498636ms to restartPrimaryControlPlane
	I1120 22:25:55.051206 1031720 kubeadm.go:403] duration metric: took 157.125065ms to StartCluster
	I1120 22:25:55.051221 1031720 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:55.051294 1031720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:25:55.052816 1031720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:25:55.053079 1031720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:25:55.053497 1031720 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:25:55.053552 1031720 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:25:55.053651 1031720 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-559701"
	I1120 22:25:55.053671 1031720 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-559701"
	W1120 22:25:55.053683 1031720 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:25:55.053705 1031720 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-559701"
	I1120 22:25:55.053717 1031720 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-559701"
	W1120 22:25:55.053727 1031720 addons.go:248] addon dashboard should already be in state true
	I1120 22:25:55.053749 1031720 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:25:55.054252 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:55.054428 1031720 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:25:55.054795 1031720 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-559701"
	I1120 22:25:55.054813 1031720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-559701"
	I1120 22:25:55.055132 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:55.055435 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:55.058788 1031720 out.go:179] * Verifying Kubernetes components...
	I1120 22:25:55.061786 1031720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:25:55.118196 1031720 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-559701"
	W1120 22:25:55.118220 1031720 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:25:55.118247 1031720 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:25:55.118656 1031720 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:25:55.127139 1031720 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:25:55.127299 1031720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:25:55.131666 1031720 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:25:55.131845 1031720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:25:55.131866 1031720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:25:55.131930 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:55.134614 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:25:55.134646 1031720 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:25:55.134715 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:55.158144 1031720 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:25:55.158166 1031720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:25:55.158234 1031720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:25:55.223411 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:55.224127 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:55.229420 1031720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:25:55.465867 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:25:55.465894 1031720 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:25:55.475112 1031720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:25:55.483903 1031720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:25:55.518948 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:25:55.519039 1031720 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:25:55.550119 1031720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:25:55.571376 1031720 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-559701" to be "Ready" ...
	I1120 22:25:55.605035 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:25:55.605059 1031720 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:25:55.680172 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:25:55.680249 1031720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:25:55.765748 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:25:55.765815 1031720 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:25:55.804805 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:25:55.804888 1031720 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:25:55.828642 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:25:55.828726 1031720 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:25:55.866681 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:25:55.866758 1031720 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:25:55.898236 1031720 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:25:55.898316 1031720 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:25:55.925202 1031720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:26:00.770110 1031720 node_ready.go:49] node "default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:00.770152 1031720 node_ready.go:38] duration metric: took 5.198632331s for node "default-k8s-diff-port-559701" to be "Ready" ...
	I1120 22:26:00.770166 1031720 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:26:00.770231 1031720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:26:02.822638 1031720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.272414488s)
	I1120 22:26:02.822735 1031720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.338755772s)
	I1120 22:26:02.822861 1031720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.897552141s)
	I1120 22:26:02.822928 1031720 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.052681094s)
	I1120 22:26:02.822944 1031720 api_server.go:72] duration metric: took 7.769832118s to wait for apiserver process to appear ...
	I1120 22:26:02.822950 1031720 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:26:02.822982 1031720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 22:26:02.826354 1031720 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-559701 addons enable metrics-server
	
	I1120 22:26:02.852142 1031720 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 22:26:02.857370 1031720 api_server.go:141] control plane version: v1.34.1
	I1120 22:26:02.857406 1031720 api_server.go:131] duration metric: took 34.449395ms to wait for apiserver health ...
	I1120 22:26:02.857415 1031720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:26:02.870285 1031720 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Nov 20 22:25:50 embed-certs-270206 crio[837]: time="2025-11-20T22:25:50.808892078Z" level=info msg="Created container df3c353a5ffde0cf99dac3dac93e2225b87f23d35721714677a2cda31ad5292b: kube-system/coredns-66bc5c9577-c5cg5/coredns" id=04e0e621-eb6d-4d4a-a0f4-217ef3e81670 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:50 embed-certs-270206 crio[837]: time="2025-11-20T22:25:50.80977337Z" level=info msg="Starting container: df3c353a5ffde0cf99dac3dac93e2225b87f23d35721714677a2cda31ad5292b" id=414b4adb-9679-4287-9640-d8120b176c93 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:25:50 embed-certs-270206 crio[837]: time="2025-11-20T22:25:50.816665659Z" level=info msg="Started container" PID=1803 containerID=df3c353a5ffde0cf99dac3dac93e2225b87f23d35721714677a2cda31ad5292b description=kube-system/coredns-66bc5c9577-c5cg5/coredns id=414b4adb-9679-4287-9640-d8120b176c93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df542c3756c90fc9a342bcdfe9cffec26af85d54eb12f84481ace84246ca5820
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.40982891Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a9a1c43e-9988-40d3-8a66-24e84a4b4d99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.40995454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.417639521Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545 UID:6afd63b7-397f-4631-b006-dd708750d125 NetNS:/var/run/netns/3e74812d-89ff-4a17-b56c-eabb9afe7943 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079028}] Aliases:map[]}"
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.417809304Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.431793262Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545 UID:6afd63b7-397f-4631-b006-dd708750d125 NetNS:/var/run/netns/3e74812d-89ff-4a17-b56c-eabb9afe7943 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079028}] Aliases:map[]}"
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.432080125Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.442014038Z" level=info msg="Ran pod sandbox 5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545 with infra container: default/busybox/POD" id=a9a1c43e-9988-40d3-8a66-24e84a4b4d99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.443319244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e48d406e-5c58-45fb-9eab-0a4fac9bd67c name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.44361958Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e48d406e-5c58-45fb-9eab-0a4fac9bd67c name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.443725797Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e48d406e-5c58-45fb-9eab-0a4fac9bd67c name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.447120491Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7a9d79eb-80d8-4100-a130-47b4781efd26 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:25:54 embed-certs-270206 crio[837]: time="2025-11-20T22:25:54.451420042Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.61358042Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7a9d79eb-80d8-4100-a130-47b4781efd26 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.614907017Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37beb73b-788e-445c-a6c3-8d8983b711fb name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.61688964Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a84e5bdd-20f1-498a-8d98-d73fac7b815a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.622204826Z" level=info msg="Creating container: default/busybox/busybox" id=e452470a-a973-494c-9c23-334de1339dd7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.622334214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.627424635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.627905888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.674263629Z" level=info msg="Created container 6beb859a3ebf76c681c85419aa4ddc32d38fc9b0a76fc02775b86a21dcfe1dc4: default/busybox/busybox" id=e452470a-a973-494c-9c23-334de1339dd7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.677704567Z" level=info msg="Starting container: 6beb859a3ebf76c681c85419aa4ddc32d38fc9b0a76fc02775b86a21dcfe1dc4" id=014fddc2-ac18-423e-823a-acb9b4b481d4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:25:56 embed-certs-270206 crio[837]: time="2025-11-20T22:25:56.679403059Z" level=info msg="Started container" PID=1855 containerID=6beb859a3ebf76c681c85419aa4ddc32d38fc9b0a76fc02775b86a21dcfe1dc4 description=default/busybox/busybox id=014fddc2-ac18-423e-823a-acb9b4b481d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	6beb859a3ebf7       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   5395a91b529c3       busybox                                      default
	df3c353a5ffde       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   df542c3756c90       coredns-66bc5c9577-c5cg5                     kube-system
	df1548e3e73c7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   d5dc68fb05ee0       storage-provisioner                          kube-system
	6dd87b72dc6d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   075f5efdfcee5       kube-proxy-9d84b                             kube-system
	0abb1afea224d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   6d20dfc0615ef       kindnet-9sqjv                                kube-system
	ee79e994252f6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b4118861efebc       kube-controller-manager-embed-certs-270206   kube-system
	676e9210a7e96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   25115cfd8bf9e       etcd-embed-certs-270206                      kube-system
	d7c086bb8bc58       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   702c4f72695e8       kube-scheduler-embed-certs-270206            kube-system
	768d9e7ed6fe2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   735f3330d4567       kube-apiserver-embed-certs-270206            kube-system
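
The listing above is the post-mortem container view for the node; roughly the same tables come from crictl directly (a sketch, assuming crio's default socket):

	sudo crictl ps -a     # containers, including exited ones
	sudo crictl pods      # the sandboxes referenced in the POD ID column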
	
	
	==> coredns [df3c353a5ffde0cf99dac3dac93e2225b87f23d35721714677a2cda31ad5292b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36410 - 46241 "HINFO IN 7546517807581441370.7077268601497105976. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020124756s
	
	
	==> describe nodes <==
	Name:               embed-certs-270206
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-270206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-270206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_25_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-270206
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:26:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:26:04 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:26:04 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:26:04 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:26:04 +0000   Thu, 20 Nov 2025 22:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-270206
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                484a9a63-7f62-411b-a1d5-b7485838eb61
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-c5cg5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-270206                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-9sqjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-270206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-270206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-9d84b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-270206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  69s (x8 over 70s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 70s)  kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 70s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-270206 event: Registered Node embed-certs-270206 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-270206 status is now: NodeReady
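
The node description above (labels, conditions, capacity/allocatable, non-terminated pods, events) is the standard kubectl view; it can be regenerated against this profile's context (a sketch, assuming the kubeconfig context matches the profile name as elsewhere in this run):

	kubectl --context embed-certs-270206 describe node embed-certs-270206
	# the same data as structured output, e.g. to compare allocatable against the requests table above
	kubectl --context embed-certs-270206 get node embed-certs-270206 -o jsonpath='{.status.allocatable}'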
	
	
	==> dmesg <==
	[Nov20 22:01] overlayfs: idmapped layers are currently not supported
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [676e9210a7e961970a967c96b10b369cb51fc52496164e18f9c52af6b369e868] <==
	{"level":"warn","ts":"2025-11-20T22:24:57.937684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:57.976572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.000065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.019857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.047882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.089881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.107190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.145517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.165153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.204168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.227985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.251800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.291111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.315885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.342871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.364668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.411652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.459863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.487594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.512810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.552912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.610694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.626096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.666707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:24:58.859800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:26:04 up  5:08,  0 user,  load average: 3.46, 3.25, 2.63
	Linux embed-certs-270206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0abb1afea224d9fd9a35a9b67f04a03ede5623c183737f6dc2537cb96e6c3a4c] <==
	I1120 22:25:09.532667       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:25:09.532980       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:25:09.533101       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:25:09.533112       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:25:09.533127       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:25:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:25:09.716523       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:25:09.716539       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:25:09.716548       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:25:09.716845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:25:39.716421       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:25:39.716546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:25:39.717527       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:25:39.717612       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 22:25:41.317394       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:25:41.317432       1 metrics.go:72] Registering metrics
	I1120 22:25:41.317485       1 controller.go:711] "Syncing nftables rules"
	I1120 22:25:49.719117       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:25:49.719175       1 main.go:301] handling current node
	I1120 22:25:59.716909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:25:59.716979       1 main.go:301] handling current node
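
kindnet initially times out listing Pods/Nodes/NetworkPolicies against the service VIP 10.96.0.1:443 and only syncs about 30 seconds later, consistent with the node going Ready at 22:25:50. A quick check that the VIP is reachable from the node (a sketch; assumes curl is available inside the kicbase node image, and that /healthz is readable anonymously as in a default kubeadm setup):

	minikube -p embed-certs-270206 ssh -- curl -sk https://10.96.0.1:443/healthz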
	
	
	==> kube-apiserver [768d9e7ed6fe20856c4efaf184820f567e71de9d76fd35406818b506cfbc466e] <==
	I1120 22:25:00.049367       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:25:00.050476       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:25:00.050663       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 22:25:00.109749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:25:00.109987       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:25:00.116266       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 22:25:00.248913       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:25:00.676278       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 22:25:00.694121       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 22:25:00.694255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:25:01.740603       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:25:01.809900       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:25:01.962500       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 22:25:01.975670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 22:25:01.977614       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:25:01.988975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:25:02.734185       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:25:02.741355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:25:02.768515       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 22:25:02.779332       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:25:08.469772       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:25:08.571518       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:25:08.579895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:25:08.869893       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 22:26:02.299970       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:60772: use of closed network connection
	
	
	==> kube-controller-manager [ee79e994252f60b09be58c41060052b001de21ac5fe4f314e4414ca710f9a67a] <==
	I1120 22:25:07.819268       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:25:07.820473       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:25:07.828841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:25:07.828943       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:25:07.832341       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:25:07.841545       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:25:07.852814       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:25:07.855884       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:25:07.860367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:25:07.861698       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 22:25:07.861855       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:25:07.861899       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:25:07.861928       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:25:07.867090       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:25:07.867212       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 22:25:07.867683       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:25:07.867748       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:25:07.868348       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:25:07.868862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 22:25:07.868930       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:25:07.868945       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:25:07.868956       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:25:07.868964       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:25:07.879454       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:25:52.824589       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6dd87b72dc6d530f6482431c317cc24da42afdc39c35ddc72319ad6b96c1fed1] <==
	I1120 22:25:09.602178       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:25:09.735257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:25:09.836541       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:25:09.836583       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:25:09.836648       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:25:09.979085       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:25:09.979143       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:25:09.985764       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:25:09.986086       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:25:09.986101       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:25:09.987560       1 config.go:200] "Starting service config controller"
	I1120 22:25:09.987572       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:25:09.987587       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:25:09.987591       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:25:09.987600       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:25:09.987604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:25:09.988214       1 config.go:309] "Starting node config controller"
	I1120 22:25:09.988221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:25:09.988226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:25:10.088677       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:25:10.088719       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:25:10.088786       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
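
kube-proxy warns that nodePortAddresses is unset, so NodePort services accept connections on every local IP. If that matters, the setting lives in the kube-proxy ConfigMap that kubeadm-based clusters (including minikube) manage; a sketch of where to change it (the ConfigMap layout and the "primary" value are the usual kubeadm/kube-proxy conventions, not confirmed from this log):

	kubectl --context embed-certs-270206 -n kube-system edit configmap kube-proxy
	#   in config.conf set:  nodePortAddresses: ["primary"]
	kubectl --context embed-certs-270206 -n kube-system rollout restart daemonset kube-proxy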
	
	
	==> kube-scheduler [d7c086bb8bc583f4c4823722d06747c7c898ce290395a92a1ee14da908a8a009] <==
	I1120 22:24:59.232210       1 serving.go:386] Generated self-signed cert in-memory
	W1120 22:25:01.414711       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:25:01.414750       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:25:01.414760       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:25:01.414767       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:25:01.446618       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:25:01.446747       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:25:01.449215       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:25:01.449562       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:25:01.450478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:25:01.449591       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 22:25:01.469341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:25:02.750773       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:25:03 embed-certs-270206 kubelet[1358]: I1120 22:25:03.933060    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-270206" podStartSLOduration=0.933030441 podStartE2EDuration="933.030441ms" podCreationTimestamp="2025-11-20 22:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:03.919329382 +0000 UTC m=+1.350628677" watchObservedRunningTime="2025-11-20 22:25:03.933030441 +0000 UTC m=+1.364329735"
	Nov 20 22:25:07 embed-certs-270206 kubelet[1358]: I1120 22:25:07.807717    1358 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 22:25:07 embed-certs-270206 kubelet[1358]: I1120 22:25:07.808967    1358 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.943761    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/372ec000-a084-43d1-ac94-5cb64204ba40-kube-proxy\") pod \"kube-proxy-9d84b\" (UID: \"372ec000-a084-43d1-ac94-5cb64204ba40\") " pod="kube-system/kube-proxy-9d84b"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944337    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1d0771a4-278b-44eb-a563-ab815df51728-cni-cfg\") pod \"kindnet-9sqjv\" (UID: \"1d0771a4-278b-44eb-a563-ab815df51728\") " pod="kube-system/kindnet-9sqjv"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944496    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d0771a4-278b-44eb-a563-ab815df51728-xtables-lock\") pod \"kindnet-9sqjv\" (UID: \"1d0771a4-278b-44eb-a563-ab815df51728\") " pod="kube-system/kindnet-9sqjv"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944623    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d0771a4-278b-44eb-a563-ab815df51728-lib-modules\") pod \"kindnet-9sqjv\" (UID: \"1d0771a4-278b-44eb-a563-ab815df51728\") " pod="kube-system/kindnet-9sqjv"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944739    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kb2x\" (UniqueName: \"kubernetes.io/projected/1d0771a4-278b-44eb-a563-ab815df51728-kube-api-access-7kb2x\") pod \"kindnet-9sqjv\" (UID: \"1d0771a4-278b-44eb-a563-ab815df51728\") " pod="kube-system/kindnet-9sqjv"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944864    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/372ec000-a084-43d1-ac94-5cb64204ba40-xtables-lock\") pod \"kube-proxy-9d84b\" (UID: \"372ec000-a084-43d1-ac94-5cb64204ba40\") " pod="kube-system/kube-proxy-9d84b"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.944981    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/372ec000-a084-43d1-ac94-5cb64204ba40-lib-modules\") pod \"kube-proxy-9d84b\" (UID: \"372ec000-a084-43d1-ac94-5cb64204ba40\") " pod="kube-system/kube-proxy-9d84b"
	Nov 20 22:25:08 embed-certs-270206 kubelet[1358]: I1120 22:25:08.945111    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5k7b\" (UniqueName: \"kubernetes.io/projected/372ec000-a084-43d1-ac94-5cb64204ba40-kube-api-access-v5k7b\") pod \"kube-proxy-9d84b\" (UID: \"372ec000-a084-43d1-ac94-5cb64204ba40\") " pod="kube-system/kube-proxy-9d84b"
	Nov 20 22:25:09 embed-certs-270206 kubelet[1358]: I1120 22:25:09.177129    1358 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:25:09 embed-certs-270206 kubelet[1358]: I1120 22:25:09.937329    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9sqjv" podStartSLOduration=1.937306506 podStartE2EDuration="1.937306506s" podCreationTimestamp="2025-11-20 22:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:09.891130938 +0000 UTC m=+7.322430240" watchObservedRunningTime="2025-11-20 22:25:09.937306506 +0000 UTC m=+7.368605800"
	Nov 20 22:25:12 embed-certs-270206 kubelet[1358]: I1120 22:25:12.200968    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9d84b" podStartSLOduration=4.200951242 podStartE2EDuration="4.200951242s" podCreationTimestamp="2025-11-20 22:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:09.975210854 +0000 UTC m=+7.406510148" watchObservedRunningTime="2025-11-20 22:25:12.200951242 +0000 UTC m=+9.632250536"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: I1120 22:25:50.260481    1358 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: I1120 22:25:50.473146    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbmp\" (UniqueName: \"kubernetes.io/projected/42c2a518-d0e5-4c59-9710-7b624f63c38c-kube-api-access-npbmp\") pod \"coredns-66bc5c9577-c5cg5\" (UID: \"42c2a518-d0e5-4c59-9710-7b624f63c38c\") " pod="kube-system/coredns-66bc5c9577-c5cg5"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: I1120 22:25:50.473374    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/276e2ed3-8832-46cb-baf7-6accd2f37e27-tmp\") pod \"storage-provisioner\" (UID: \"276e2ed3-8832-46cb-baf7-6accd2f37e27\") " pod="kube-system/storage-provisioner"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: I1120 22:25:50.473515    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c2a518-d0e5-4c59-9710-7b624f63c38c-config-volume\") pod \"coredns-66bc5c9577-c5cg5\" (UID: \"42c2a518-d0e5-4c59-9710-7b624f63c38c\") " pod="kube-system/coredns-66bc5c9577-c5cg5"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: I1120 22:25:50.473609    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z678c\" (UniqueName: \"kubernetes.io/projected/276e2ed3-8832-46cb-baf7-6accd2f37e27-kube-api-access-z678c\") pod \"storage-provisioner\" (UID: \"276e2ed3-8832-46cb-baf7-6accd2f37e27\") " pod="kube-system/storage-provisioner"
	Nov 20 22:25:50 embed-certs-270206 kubelet[1358]: W1120 22:25:50.715657    1358 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-df542c3756c90fc9a342bcdfe9cffec26af85d54eb12f84481ace84246ca5820 WatchSource:0}: Error finding container df542c3756c90fc9a342bcdfe9cffec26af85d54eb12f84481ace84246ca5820: Status 404 returned error can't find the container with id df542c3756c90fc9a342bcdfe9cffec26af85d54eb12f84481ace84246ca5820
	Nov 20 22:25:51 embed-certs-270206 kubelet[1358]: I1120 22:25:51.034550    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c5cg5" podStartSLOduration=42.034527899 podStartE2EDuration="42.034527899s" podCreationTimestamp="2025-11-20 22:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:50.994781378 +0000 UTC m=+48.426080680" watchObservedRunningTime="2025-11-20 22:25:51.034527899 +0000 UTC m=+48.465827185"
	Nov 20 22:25:51 embed-certs-270206 kubelet[1358]: I1120 22:25:51.975463    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.975433408 podStartE2EDuration="41.975433408s" podCreationTimestamp="2025-11-20 22:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:25:51.036308 +0000 UTC m=+48.467607294" watchObservedRunningTime="2025-11-20 22:25:51.975433408 +0000 UTC m=+49.406732702"
	Nov 20 22:25:54 embed-certs-270206 kubelet[1358]: I1120 22:25:54.209049    1358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87nf\" (UniqueName: \"kubernetes.io/projected/6afd63b7-397f-4631-b006-dd708750d125-kube-api-access-s87nf\") pod \"busybox\" (UID: \"6afd63b7-397f-4631-b006-dd708750d125\") " pod="default/busybox"
	Nov 20 22:25:54 embed-certs-270206 kubelet[1358]: W1120 22:25:54.438589    1358 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545 WatchSource:0}: Error finding container 5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545: Status 404 returned error can't find the container with id 5395a91b529c381df209ac86d943b73a5462d9a8ed86aeab0fc7f73e6fb88545
	Nov 20 22:26:03 embed-certs-270206 kubelet[1358]: E1120 22:26:03.033105    1358 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-6beb859a3ebf76c681c85419aa4ddc32d38fc9b0a76fc02775b86a21dcfe1dc4\": RecentStats: unable to find data in memory cache]"
	
	
	==> storage-provisioner [df1548e3e73c7d6ddbcb832947bed11b4a4d2951dd9ca865c1aa1dda79b21a53] <==
	I1120 22:25:50.780796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:25:50.832842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:25:50.832960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:25:50.847563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:50.861490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:25:50.861724       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:25:50.861931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_40cd9000-a6a5-4d0b-905e-97af44d0c87f!
	I1120 22:25:50.862389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f3cea17-cd64-4701-9269-df7a7dbcb868", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-270206_40cd9000-a6a5-4d0b-905e-97af44d0c87f became leader
	W1120 22:25:50.872135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:50.895223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:25:50.967110       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_40cd9000-a6a5-4d0b-905e-97af44d0c87f!
	W1120 22:25:52.902445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:52.908530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:54.912410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:54.922552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:56.925985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:56.933278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:58.936319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:25:58.941252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:00.944881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:00.950004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:02.963102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:02.972503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
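
The storage-provisioner still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) as its leader-election lock, which is what triggers the repeated deprecation warnings; they are harmless here. The lock object and the EndpointSlice API the warning points to can be inspected with (a sketch):

	kubectl --context embed-certs-270206 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context embed-certs-270206 -n kube-system get endpointslices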
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-270206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-559701 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-559701 --alsologtostderr -v=1: exit status 80 (2.47972445s)
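
The stderr below shows the pause path disabling the kubelet and then repeatedly retrying `sudo runc list -f json`, which fails each time with "open /run/runc: no such file or directory"; per the summary above, the command gives up with exit status 80 after about 2.5 seconds. A minimal way to see which OCI runtime and state directory crio is actually using on that node (a sketch; the directory names checked are assumptions, not taken from this output):

	minikube -p default-k8s-diff-port-559701 ssh -- "sudo crictl info | grep -iA2 runtime"
	minikube -p default-k8s-diff-port-559701 ssh -- "ls -d /run/runc /run/crun /run/crio 2>/dev/null"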

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-559701 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:26:54.529334 1036891 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:26:54.529541 1036891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:54.529568 1036891 out.go:374] Setting ErrFile to fd 2...
	I1120 22:26:54.529588 1036891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:54.530040 1036891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:26:54.530438 1036891 out.go:368] Setting JSON to false
	I1120 22:26:54.530490 1036891 mustload.go:66] Loading cluster: default-k8s-diff-port-559701
	I1120 22:26:54.531657 1036891 config.go:182] Loaded profile config "default-k8s-diff-port-559701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:54.532300 1036891 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-559701 --format={{.State.Status}}
	I1120 22:26:54.550144 1036891 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:26:54.550578 1036891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:26:54.615565 1036891 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:26:54.606113531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:26:54.616196 1036891 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-559701 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:26:54.619744 1036891 out.go:179] * Pausing node default-k8s-diff-port-559701 ... 
	I1120 22:26:54.623614 1036891 host.go:66] Checking if "default-k8s-diff-port-559701" exists ...
	I1120 22:26:54.623981 1036891 ssh_runner.go:195] Run: systemctl --version
	I1120 22:26:54.624031 1036891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-559701
	I1120 22:26:54.645006 1036891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/default-k8s-diff-port-559701/id_rsa Username:docker}
	I1120 22:26:54.746305 1036891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:54.764437 1036891 pause.go:52] kubelet running: true
	I1120 22:26:54.764525 1036891 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:26:55.031491 1036891 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:26:55.031597 1036891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:26:55.106273 1036891 cri.go:89] found id: "c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c"
	I1120 22:26:55.106349 1036891 cri.go:89] found id: "978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac"
	I1120 22:26:55.106371 1036891 cri.go:89] found id: "71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	I1120 22:26:55.106400 1036891 cri.go:89] found id: "0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88"
	I1120 22:26:55.106411 1036891 cri.go:89] found id: "5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba"
	I1120 22:26:55.106415 1036891 cri.go:89] found id: "5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c"
	I1120 22:26:55.106418 1036891 cri.go:89] found id: "f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367"
	I1120 22:26:55.106421 1036891 cri.go:89] found id: "24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41"
	I1120 22:26:55.106424 1036891 cri.go:89] found id: "1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb"
	I1120 22:26:55.106431 1036891 cri.go:89] found id: "820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	I1120 22:26:55.106434 1036891 cri.go:89] found id: "f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb"
	I1120 22:26:55.106438 1036891 cri.go:89] found id: ""
	I1120 22:26:55.106493 1036891 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:26:55.118926 1036891 retry.go:31] will retry after 182.366609ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:55Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:26:55.302321 1036891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:55.316070 1036891 pause.go:52] kubelet running: false
	I1120 22:26:55.316133 1036891 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:26:55.514877 1036891 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:26:55.515045 1036891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:26:55.588534 1036891 cri.go:89] found id: "c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c"
	I1120 22:26:55.588559 1036891 cri.go:89] found id: "978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac"
	I1120 22:26:55.588564 1036891 cri.go:89] found id: "71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	I1120 22:26:55.588568 1036891 cri.go:89] found id: "0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88"
	I1120 22:26:55.588572 1036891 cri.go:89] found id: "5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba"
	I1120 22:26:55.588575 1036891 cri.go:89] found id: "5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c"
	I1120 22:26:55.588578 1036891 cri.go:89] found id: "f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367"
	I1120 22:26:55.588581 1036891 cri.go:89] found id: "24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41"
	I1120 22:26:55.588585 1036891 cri.go:89] found id: "1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb"
	I1120 22:26:55.588614 1036891 cri.go:89] found id: "820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	I1120 22:26:55.588625 1036891 cri.go:89] found id: "f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb"
	I1120 22:26:55.588629 1036891 cri.go:89] found id: ""
	I1120 22:26:55.588686 1036891 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:26:55.600091 1036891 retry.go:31] will retry after 426.997632ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:55Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:26:56.027861 1036891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:56.041894 1036891 pause.go:52] kubelet running: false
	I1120 22:26:56.041990 1036891 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:26:56.245536 1036891 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:26:56.245648 1036891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:26:56.329908 1036891 cri.go:89] found id: "c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c"
	I1120 22:26:56.329937 1036891 cri.go:89] found id: "978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac"
	I1120 22:26:56.329943 1036891 cri.go:89] found id: "71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	I1120 22:26:56.329946 1036891 cri.go:89] found id: "0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88"
	I1120 22:26:56.329950 1036891 cri.go:89] found id: "5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba"
	I1120 22:26:56.329954 1036891 cri.go:89] found id: "5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c"
	I1120 22:26:56.329957 1036891 cri.go:89] found id: "f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367"
	I1120 22:26:56.329961 1036891 cri.go:89] found id: "24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41"
	I1120 22:26:56.329988 1036891 cri.go:89] found id: "1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb"
	I1120 22:26:56.330004 1036891 cri.go:89] found id: "820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	I1120 22:26:56.330009 1036891 cri.go:89] found id: "f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb"
	I1120 22:26:56.330012 1036891 cri.go:89] found id: ""
	I1120 22:26:56.330084 1036891 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:26:56.341498 1036891 retry.go:31] will retry after 291.44934ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:56Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:26:56.634060 1036891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:56.647667 1036891 pause.go:52] kubelet running: false
	I1120 22:26:56.647750 1036891 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:26:56.839845 1036891 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:26:56.839923 1036891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:26:56.915383 1036891 cri.go:89] found id: "c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c"
	I1120 22:26:56.915407 1036891 cri.go:89] found id: "978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac"
	I1120 22:26:56.915412 1036891 cri.go:89] found id: "71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	I1120 22:26:56.915415 1036891 cri.go:89] found id: "0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88"
	I1120 22:26:56.915418 1036891 cri.go:89] found id: "5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba"
	I1120 22:26:56.915422 1036891 cri.go:89] found id: "5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c"
	I1120 22:26:56.915425 1036891 cri.go:89] found id: "f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367"
	I1120 22:26:56.915428 1036891 cri.go:89] found id: "24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41"
	I1120 22:26:56.915430 1036891 cri.go:89] found id: "1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb"
	I1120 22:26:56.915474 1036891 cri.go:89] found id: "820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	I1120 22:26:56.915485 1036891 cri.go:89] found id: "f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb"
	I1120 22:26:56.915493 1036891 cri.go:89] found id: ""
	I1120 22:26:56.915560 1036891 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:26:56.930530 1036891 out.go:203] 
	W1120 22:26:56.933563 1036891 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:26:56.933631 1036891 out.go:285] * 
	W1120 22:26:56.942942 1036891 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:26:56.945951 1036891 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-559701 --alsologtostderr -v=1 failed: exit status 80
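The pause failure above comes from the container-listing step: every attempt at "sudo runc list -f json" inside the node exits with status 1 because /run/runc is missing ("open /run/runc: no such file or directory"), and after the retries at 22:26:55-22:26:56 minikube gives up with GUEST_PAUSE. A minimal manual check, assuming the default-k8s-diff-port-559701 node container from the inspect output below is still running (illustrative only, not part of the test run):

	# Does the runc state directory exist inside the node at all?
	docker exec default-k8s-diff-port-559701 ls -ld /run/runc
	# What does the CRI runtime itself report as running?
	docker exec default-k8s-diff-port-559701 sudo crictl ps
	# Re-run the command that fails in the log above
	docker exec default-k8s-diff-port-559701 sudo runc list -f json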
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-559701
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-559701:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	        "Created": "2025-11-20T22:23:58.497614948Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1031845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:25:46.428971384Z",
	            "FinishedAt": "2025-11-20T22:25:45.579925223Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hostname",
	        "HostsPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hosts",
	        "LogPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225-json.log",
	        "Name": "/default-k8s-diff-port-559701",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-559701:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-559701",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	                "LowerDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-559701",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-559701/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-559701",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "665355898c11ac8f708d14bf7a2c51ea90e6420bf85e66ceab32f8ef9822d902",
	            "SandboxKey": "/var/run/docker/netns/665355898c11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-559701": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:f4:05:b4:50:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f87df3640a96e74282a6fa8d1f119c94634bd199cb6db600d19a35606adfa81c",
	                    "EndpointID": "79fc9539923ae76d6f8b6a0f42b6216206a984cb39ae8e4751cfb47183aea6cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-559701",
	                        "dec634595af0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
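The inspect output shows the node container is still Running, with every published port bound to 127.0.0.1. As a quick cross-check of the mapping, the same Go template minikube itself uses later in this log can be run by hand (illustrative only):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-559701
	# expected to print 34177, matching the "22/tcp" entry in NetworkSettings above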
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701: exit status 2 (355.728027ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25: (1.435681522s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:26:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:26:18.408688 1034660 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:26:18.409041 1034660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:18.409082 1034660 out.go:374] Setting ErrFile to fd 2...
	I1120 22:26:18.409128 1034660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:18.409586 1034660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:26:18.410171 1034660 out.go:368] Setting JSON to false
	I1120 22:26:18.411537 1034660 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18504,"bootTime":1763659075,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:26:18.411667 1034660 start.go:143] virtualization:  
	I1120 22:26:18.415065 1034660 out.go:179] * [embed-certs-270206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:26:18.419117 1034660 notify.go:221] Checking for updates...
	I1120 22:26:18.419734 1034660 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:26:18.423386 1034660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:26:18.426519 1034660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:18.429649 1034660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:26:18.433069 1034660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:26:18.436148 1034660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:26:18.439729 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:18.440300 1034660 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:26:18.464251 1034660 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:26:18.464484 1034660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:26:18.533255 1034660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:26:18.523527362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:26:18.533370 1034660 docker.go:319] overlay module found
	I1120 22:26:18.536517 1034660 out.go:179] * Using the docker driver based on existing profile
	I1120 22:26:18.539512 1034660 start.go:309] selected driver: docker
	I1120 22:26:18.539583 1034660 start.go:930] validating driver "docker" against &{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:18.539691 1034660 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:26:18.540505 1034660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:26:18.596503 1034660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:26:18.587072606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:26:18.596843 1034660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:26:18.596879 1034660 cni.go:84] Creating CNI manager for ""
	I1120 22:26:18.596936 1034660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:26:18.596977 1034660 start.go:353] cluster config:
	{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:18.600130 1034660 out.go:179] * Starting "embed-certs-270206" primary control-plane node in "embed-certs-270206" cluster
	I1120 22:26:18.603059 1034660 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:26:18.606139 1034660 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:26:18.609042 1034660 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:26:18.609091 1034660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:26:18.609102 1034660 cache.go:65] Caching tarball of preloaded images
	I1120 22:26:18.609461 1034660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:26:18.609685 1034660 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:26:18.609697 1034660 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:26:18.609825 1034660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:26:18.633146 1034660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:26:18.633169 1034660 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:26:18.633188 1034660 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:26:18.633212 1034660 start.go:360] acquireMachinesLock for embed-certs-270206: {Name:mk80d30c009178e97eae54d0fb9c0edcaf285b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:26:18.633279 1034660 start.go:364] duration metric: took 46.441µs to acquireMachinesLock for "embed-certs-270206"
	I1120 22:26:18.633307 1034660 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:26:18.633317 1034660 fix.go:54] fixHost starting: 
	I1120 22:26:18.633565 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:18.650560 1034660 fix.go:112] recreateIfNeeded on embed-certs-270206: state=Stopped err=<nil>
	W1120 22:26:18.650594 1034660 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:26:16.501734 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:18.503808 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:21.001158 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:18.653721 1034660 out.go:252] * Restarting existing docker container for "embed-certs-270206" ...
	I1120 22:26:18.653821 1034660 cli_runner.go:164] Run: docker start embed-certs-270206
	I1120 22:26:18.931397 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:18.953990 1034660 kic.go:430] container "embed-certs-270206" state is running.
	I1120 22:26:18.954580 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:18.976516 1034660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:26:18.976745 1034660 machine.go:94] provisionDockerMachine start ...
	I1120 22:26:18.976811 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:18.998881 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:18.999295 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:18.999313 1034660 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:26:19.000381 1034660 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:26:22.146672 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:26:22.146697 1034660 ubuntu.go:182] provisioning hostname "embed-certs-270206"
	I1120 22:26:22.146764 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.164994 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.165347 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.165369 1034660 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-270206 && echo "embed-certs-270206" | sudo tee /etc/hostname
	I1120 22:26:22.330820 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:26:22.331006 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.351089 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.351428 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.351451 1034660 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-270206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-270206/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-270206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:26:22.495469 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:26:22.495500 1034660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:26:22.495534 1034660 ubuntu.go:190] setting up certificates
	I1120 22:26:22.495544 1034660 provision.go:84] configureAuth start
	I1120 22:26:22.495621 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:22.514786 1034660 provision.go:143] copyHostCerts
	I1120 22:26:22.514862 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:26:22.514881 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:26:22.514956 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:26:22.515099 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:26:22.515112 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:26:22.515141 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:26:22.515197 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:26:22.515206 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:26:22.515231 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:26:22.515289 1034660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.embed-certs-270206 san=[127.0.0.1 192.168.76.2 embed-certs-270206 localhost minikube]
	I1120 22:26:22.719743 1034660 provision.go:177] copyRemoteCerts
	I1120 22:26:22.719813 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:26:22.719862 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.738478 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:22.838803 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 22:26:22.857736 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:26:22.876790 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:26:22.895199 1034660 provision.go:87] duration metric: took 399.625611ms to configureAuth
	I1120 22:26:22.895227 1034660 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:26:22.895472 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:22.895584 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.916967 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.917291 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.917309 1034660 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:26:23.270665 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:26:23.270693 1034660 machine.go:97] duration metric: took 4.293934879s to provisionDockerMachine
	I1120 22:26:23.270704 1034660 start.go:293] postStartSetup for "embed-certs-270206" (driver="docker")
	I1120 22:26:23.270715 1034660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:26:23.270777 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:26:23.270822 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.290153 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.391221 1034660 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:26:23.394821 1034660 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:26:23.394848 1034660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:26:23.394858 1034660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:26:23.394911 1034660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:26:23.395015 1034660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:26:23.395119 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:26:23.402729 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:26:23.421801 1034660 start.go:296] duration metric: took 151.081098ms for postStartSetup
	I1120 22:26:23.421905 1034660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:26:23.421967 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.441204 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.540040 1034660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:26:23.544943 1034660 fix.go:56] duration metric: took 4.911618702s for fixHost
	I1120 22:26:23.544969 1034660 start.go:83] releasing machines lock for "embed-certs-270206", held for 4.911673382s
	I1120 22:26:23.545039 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:23.561683 1034660 ssh_runner.go:195] Run: cat /version.json
	I1120 22:26:23.561748 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.562007 1034660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:26:23.562070 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.586831 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.603067 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.690675 1034660 ssh_runner.go:195] Run: systemctl --version
	I1120 22:26:23.796657 1034660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:26:23.842109 1034660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:26:23.847460 1034660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:26:23.847545 1034660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:26:23.855671 1034660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:26:23.855699 1034660 start.go:496] detecting cgroup driver to use...
	I1120 22:26:23.855731 1034660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:26:23.855793 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:26:23.872394 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:26:23.886153 1034660 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:26:23.886217 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:26:23.904928 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:26:23.918902 1034660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:26:24.037707 1034660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:26:24.167873 1034660 docker.go:234] disabling docker service ...
	I1120 22:26:24.167969 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:26:24.183405 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:26:24.196730 1034660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:26:24.326592 1034660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:26:24.449315 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:26:24.465334 1034660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:26:24.480321 1034660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:26:24.480442 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.489639 1034660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:26:24.489730 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.504294 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.513680 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.522754 1034660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:26:24.531375 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.541011 1034660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.550118 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.558919 1034660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:26:24.567055 1034660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:26:24.574540 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:24.693089 1034660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:26:24.878346 1034660 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:26:24.878449 1034660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:26:24.882367 1034660 start.go:564] Will wait 60s for crictl version
	I1120 22:26:24.882458 1034660 ssh_runner.go:195] Run: which crictl
	I1120 22:26:24.886343 1034660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:26:24.919047 1034660 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:26:24.919135 1034660 ssh_runner.go:195] Run: crio --version
	I1120 22:26:24.951928 1034660 ssh_runner.go:195] Run: crio --version
	I1120 22:26:24.987561 1034660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
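
The "Will wait 60s for crictl version" step above checks that the runtime answers the CRI Version call on /var/run/crio/crio.sock before Kubernetes setup continues; the Version/RuntimeName/RuntimeVersion/RuntimeApiVersion lines are that response. A rough, illustrative equivalent that talks to the socket directly over gRPC; this is a sketch, not how minikube or crictl is implemented, and it assumes google.golang.org/grpc (1.63+ for grpc.NewClient) and k8s.io/cri-api:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket the log waits for (unix:///var/run/crio/crio.sock).
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same fields crictl version prints: Version, RuntimeName, RuntimeVersion, RuntimeApiVersion.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s %s (API %s)\n",
		resp.Version, resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
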
	W1120 22:26:23.001361 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:25.011753 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:24.990401 1034660 cli_runner.go:164] Run: docker network inspect embed-certs-270206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:26:25.014554 1034660 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:26:25.019094 1034660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:26:25.031754 1034660 kubeadm.go:884] updating cluster {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:26:25.031885 1034660 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:26:25.031941 1034660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:26:25.071882 1034660 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:26:25.071909 1034660 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:26:25.071974 1034660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:26:25.100054 1034660 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:26:25.100080 1034660 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:26:25.100088 1034660 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:26:25.100204 1034660 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-270206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:26:25.100314 1034660 ssh_runner.go:195] Run: crio config
	I1120 22:26:25.157253 1034660 cni.go:84] Creating CNI manager for ""
	I1120 22:26:25.157279 1034660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:26:25.157323 1034660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:26:25.157352 1034660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-270206 NodeName:embed-certs-270206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:26:25.157494 1034660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-270206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:26:25.157568 1034660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:26:25.165992 1034660 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:26:25.166105 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:26:25.174403 1034660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 22:26:25.187462 1034660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:26:25.200386 1034660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1120 22:26:25.213473 1034660 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:26:25.217650 1034660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:26:25.227773 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:25.356302 1034660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:26:25.375112 1034660 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206 for IP: 192.168.76.2
	I1120 22:26:25.375139 1034660 certs.go:195] generating shared ca certs ...
	I1120 22:26:25.375156 1034660 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:25.375310 1034660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:26:25.375365 1034660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:26:25.375376 1034660 certs.go:257] generating profile certs ...
	I1120 22:26:25.375482 1034660 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.key
	I1120 22:26:25.375556 1034660 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386
	I1120 22:26:25.375607 1034660 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key
	I1120 22:26:25.375723 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:26:25.375759 1034660 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:26:25.375772 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:26:25.375808 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:26:25.375835 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:26:25.375862 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:26:25.375906 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:26:25.377008 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:26:25.406215 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:26:25.430507 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:26:25.454177 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:26:25.479535 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 22:26:25.511320 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:26:25.534139 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:26:25.556171 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 22:26:25.590809 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:26:25.619083 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:26:25.639317 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:26:25.664563 1034660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:26:25.679328 1034660 ssh_runner.go:195] Run: openssl version
	I1120 22:26:25.686212 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.694121 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:26:25.702221 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.706144 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.706234 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.749072 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:26:25.756769 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.764392 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:26:25.772363 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.776246 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.776309 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.817560 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:26:25.826570 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.834237 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:26:25.841942 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.845579 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.845665 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.886928 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:26:25.894502 1034660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:26:25.901444 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:26:25.946530 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:26:25.989968 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:26:26.038267 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:26:26.100343 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:26:26.169854 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
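
Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits non-zero when the certificate expires within the next 24 hours, which is what this restart path uses to decide whether control-plane certs need regenerating. The same check with Go's standard crypto/x509, as a sketch only (the expiresWithin helper and the example path are illustrative, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
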
	I1120 22:26:26.257232 1034660 kubeadm.go:401] StartCluster: {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:26.257327 1034660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:26:26.257396 1034660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:26:26.301229 1034660 cri.go:89] found id: "3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824"
	I1120 22:26:26.301252 1034660 cri.go:89] found id: "0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913"
	I1120 22:26:26.301258 1034660 cri.go:89] found id: "ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712"
	I1120 22:26:26.301262 1034660 cri.go:89] found id: "a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677"
	I1120 22:26:26.301273 1034660 cri.go:89] found id: ""
	I1120 22:26:26.301322 1034660 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:26:26.315085 1034660 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:26Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:26:26.315161 1034660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:26:26.326609 1034660 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:26:26.326637 1034660 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:26:26.326690 1034660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:26:26.341215 1034660 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:26:26.341784 1034660 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-270206" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:26.342071 1034660 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-270206" cluster setting kubeconfig missing "embed-certs-270206" context setting]
	I1120 22:26:26.342520 1034660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:26.344233 1034660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:26:26.359479 1034660 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1120 22:26:26.359518 1034660 kubeadm.go:602] duration metric: took 32.872948ms to restartPrimaryControlPlane
	I1120 22:26:26.359527 1034660 kubeadm.go:403] duration metric: took 102.307596ms to StartCluster
	I1120 22:26:26.359543 1034660 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:26.359635 1034660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:26.360899 1034660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
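
The kubeconfig repair above adds the missing "embed-certs-270206" cluster and context entries and rewrites the file under a write lock. Pictured roughly with client-go's clientcmd package (a sketch with hypothetical field values; an EmbedCerts profile like this one would embed certificate data rather than reference a CA file, and minikube's real code also manages the auth-info entry):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21923-834992/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Add the cluster and context entries the verify step reported as missing.
	name := "embed-certs-270206"
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               "https://192.168.76.2:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt",
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
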
	I1120 22:26:26.361350 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:26.361415 1034660 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:26:26.361471 1034660 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:26:26.361542 1034660 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-270206"
	I1120 22:26:26.361561 1034660 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-270206"
	W1120 22:26:26.361581 1034660 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:26:26.361605 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.362067 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.362555 1034660 addons.go:70] Setting default-storageclass=true in profile "embed-certs-270206"
	I1120 22:26:26.362585 1034660 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-270206"
	I1120 22:26:26.362653 1034660 addons.go:70] Setting dashboard=true in profile "embed-certs-270206"
	I1120 22:26:26.362687 1034660 addons.go:239] Setting addon dashboard=true in "embed-certs-270206"
	W1120 22:26:26.362706 1034660 addons.go:248] addon dashboard should already be in state true
	I1120 22:26:26.362758 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.362876 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.363359 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.376868 1034660 out.go:179] * Verifying Kubernetes components...
	I1120 22:26:26.385839 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:26.406633 1034660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:26:26.409576 1034660 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:26:26.409599 1034660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:26:26.409665 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.412456 1034660 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:26:26.415476 1034660 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:26:26.418889 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:26:26.418923 1034660 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:26:26.419086 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.425605 1034660 addons.go:239] Setting addon default-storageclass=true in "embed-certs-270206"
	W1120 22:26:26.425630 1034660 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:26:26.425653 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.426077 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.471105 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.472840 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.483255 1034660 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:26:26.483288 1034660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:26:26.483357 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.515215 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.711841 1034660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:26:26.727990 1034660 node_ready.go:35] waiting up to 6m0s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:26:26.777156 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:26:26.777232 1034660 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:26:26.793017 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:26:26.819032 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:26:26.856415 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:26:26.856442 1034660 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:26:26.889571 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:26:26.889598 1034660 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:26:26.976490 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:26:26.976516 1034660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:26:27.081650 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:26:27.081678 1034660 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:26:27.106868 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:26:27.106894 1034660 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:26:27.137962 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:26:27.137988 1034660 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:26:27.158997 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:26:27.159020 1034660 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:26:27.183739 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:26:27.183769 1034660 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:26:27.214417 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1120 22:26:27.501892 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:30.000943 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:31.115458 1034660 node_ready.go:49] node "embed-certs-270206" is "Ready"
	I1120 22:26:31.115491 1034660 node_ready.go:38] duration metric: took 4.387421846s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:26:31.115506 1034660 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:26:31.115572 1034660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:26:33.195427 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.4023189s)
	I1120 22:26:33.195493 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.376390789s)
	I1120 22:26:33.308282 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.093818286s)
	I1120 22:26:33.308571 1034660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.192980952s)
	I1120 22:26:33.308638 1034660 api_server.go:72] duration metric: took 6.947194099s to wait for apiserver process to appear ...
	I1120 22:26:33.308664 1034660 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:26:33.308711 1034660 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:26:33.311756 1034660 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-270206 addons enable metrics-server
	
	I1120 22:26:33.314616 1034660 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1120 22:26:33.317446 1034660 addons.go:515] duration metric: took 6.955962319s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1120 22:26:33.323552 1034660 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 22:26:33.325421 1034660 api_server.go:141] control plane version: v1.34.1
	I1120 22:26:33.325452 1034660 api_server.go:131] duration metric: took 16.767822ms to wait for apiserver health ...
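
The healthz wait above polls https://192.168.76.2:8443/healthz until it answers 200 with the body "ok". A bare-bones version of that probe (sketch only; it skips the cluster-CA verification a real client should do, which is why InsecureSkipVerify appears here):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// For this sketch we skip TLS verification; the real probe trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// Healthy apiservers answer 200 with the literal body "ok".
	fmt.Printf("%d %s\n", resp.StatusCode, string(body))
}
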
	I1120 22:26:33.325463 1034660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:26:33.333128 1034660 system_pods.go:59] 8 kube-system pods found
	I1120 22:26:33.333164 1034660 system_pods.go:61] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:26:33.333174 1034660 system_pods.go:61] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:26:33.333181 1034660 system_pods.go:61] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:26:33.333188 1034660 system_pods.go:61] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:26:33.333200 1034660 system_pods.go:61] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:26:33.333208 1034660 system_pods.go:61] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:26:33.333215 1034660 system_pods.go:61] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:26:33.333228 1034660 system_pods.go:61] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Running
	I1120 22:26:33.333236 1034660 system_pods.go:74] duration metric: took 7.767747ms to wait for pod list to return data ...
	I1120 22:26:33.333248 1034660 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:26:33.338696 1034660 default_sa.go:45] found service account: "default"
	I1120 22:26:33.338721 1034660 default_sa.go:55] duration metric: took 5.466548ms for default service account to be created ...
	I1120 22:26:33.338730 1034660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:26:33.344550 1034660 system_pods.go:86] 8 kube-system pods found
	I1120 22:26:33.344588 1034660 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:26:33.344597 1034660 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:26:33.344603 1034660 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:26:33.344610 1034660 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:26:33.344616 1034660 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:26:33.344621 1034660 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:26:33.344630 1034660 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:26:33.344634 1034660 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Running
	I1120 22:26:33.344641 1034660 system_pods.go:126] duration metric: took 5.905028ms to wait for k8s-apps to be running ...
	I1120 22:26:33.344652 1034660 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:26:33.344732 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:33.384503 1034660 system_svc.go:56] duration metric: took 39.826555ms WaitForService to wait for kubelet
	I1120 22:26:33.384592 1034660 kubeadm.go:587] duration metric: took 7.023146534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:26:33.384626 1034660 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:26:33.389111 1034660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:26:33.389203 1034660 node_conditions.go:123] node cpu capacity is 2
	I1120 22:26:33.389235 1034660 node_conditions.go:105] duration metric: took 4.585593ms to run NodePressure ...
	I1120 22:26:33.389290 1034660 start.go:242] waiting for startup goroutines ...
	I1120 22:26:33.389316 1034660 start.go:247] waiting for cluster config update ...
	I1120 22:26:33.389356 1034660 start.go:256] writing updated cluster config ...
	I1120 22:26:33.389772 1034660 ssh_runner.go:195] Run: rm -f paused
	I1120 22:26:33.399635 1034660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:26:33.404580 1034660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:26:32.001892 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:34.502529 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:35.410870 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:37.911491 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:37.002076 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:39.502828 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:41.002386 1031720 pod_ready.go:94] pod "coredns-66bc5c9577-kdh8n" is "Ready"
	I1120 22:26:41.002416 1031720 pod_ready.go:86] duration metric: took 38.006803069s for pod "coredns-66bc5c9577-kdh8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.006914 1031720 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.012729 1031720 pod_ready.go:94] pod "etcd-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.012757 1031720 pod_ready.go:86] duration metric: took 5.812932ms for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.016637 1031720 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.025333 1031720 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.025360 1031720 pod_ready.go:86] duration metric: took 8.695726ms for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.028104 1031720 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.200526 1031720 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.200555 1031720 pod_ready.go:86] duration metric: took 172.424404ms for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.400506 1031720 pod_ready.go:83] waiting for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.799771 1031720 pod_ready.go:94] pod "kube-proxy-q6lq4" is "Ready"
	I1120 22:26:41.799799 1031720 pod_ready.go:86] duration metric: took 399.266664ms for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.000368 1031720 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.401419 1031720 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:42.401463 1031720 pod_ready.go:86] duration metric: took 401.022173ms for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.401477 1031720 pod_ready.go:40] duration metric: took 39.413168884s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:26:42.498179 1031720 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:26:42.502654 1031720 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-559701" cluster and "default" namespace by default
	W1120 22:26:39.911726 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:42.413357 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:44.909906 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:46.910678 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:49.411670 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:51.909913 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
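
The pod_ready.go lines throughout this log poll each selected kube-system pod until its Ready condition is True, or until the pod is gone, which is why coredns shows repeated 'is not "Ready"' warnings before eventually flipping. A compact sketch of that style of wait using client-go; waitPodReadyOrGone is a hypothetical helper (not minikube's implementation), and the kubeconfig path and pod name are simply the ones from this run:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReadyOrGone polls until the pod's Ready condition is True or the pod no longer exists.
func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // "or be gone"
			}
			if err != nil {
				return false, nil // transient error; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-834992/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReadyOrGone(context.Background(), cs, "kube-system", "coredns-66bc5c9577-c5cg5", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready (or gone)")
}
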
	
	
	==> CRI-O <==
	Nov 20 22:26:29 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:29.37491767Z" level=info msg="Removed container 7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9/dashboard-metrics-scraper" id=3747d358-115f-438b-9be8-f2ab09a246ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:26:31 default-k8s-diff-port-559701 conmon[1174]: conmon 71ac6e6796c03c7fb8d8 <ninfo>: container 1185 exited with status 1
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.360210701Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c18b670-aee9-4ef0-9348-a3c85b7e3a5b name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.361572015Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9912e82b-e642-46ab-8de4-50b913b65e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.362582129Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=243b3a44-3e97-4841-8627-5ed4368509ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.362690011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.379598175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380006845Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/773439562dfa0ec3dd207b72bc565c572003f0362cc9d418bbb57eb9f2e52906/merged/etc/passwd: no such file or directory"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380113185Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/773439562dfa0ec3dd207b72bc565c572003f0362cc9d418bbb57eb9f2e52906/merged/etc/group: no such file or directory"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380495483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.425067223Z" level=info msg="Created container c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c: kube-system/storage-provisioner/storage-provisioner" id=243b3a44-3e97-4841-8627-5ed4368509ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.426349086Z" level=info msg="Starting container: c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c" id=3f87d98c-8ece-4912-9f8f-1a15ad331426 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.428634949Z" level=info msg="Started container" PID=1644 containerID=c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c description=kube-system/storage-provisioner/storage-provisioner id=3f87d98c-8ece-4912-9f8f-1a15ad331426 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b70b03cc667abe2e929031055ec9ca42b04b6be80cf4faa3e5bca8bdc1b5166
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.107107526Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.11187073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.111914497Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.111935477Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.120456416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.120495374Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.12124845Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136098512Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136141147Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136162332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.144687152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.144732527Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c4a140840e884       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   0b70b03cc667a       storage-provisioner                                    kube-system
	820ec548d452c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   fa521584b2941       dashboard-metrics-scraper-6ffb444bf9-j92j9             kubernetes-dashboard
	f46a136c47f72       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   d0482f9d32563       kubernetes-dashboard-855c9754f9-9r89r                  kubernetes-dashboard
	978f68cdd75cb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   1c017f19cc54b       coredns-66bc5c9577-kdh8n                               kube-system
	71ac6e6796c03       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   0b70b03cc667a       storage-provisioner                                    kube-system
	0f79920804108       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   c75bd385eabab       kindnet-4g2sr                                          kube-system
	60afe48cceae7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   ac2dffc111b35       busybox                                                default
	5fd128cd31c50       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   53db3265a91e1       kube-proxy-q6lq4                                       kube-system
	5a6629b69c5e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   22d99a23ab2dc       kube-apiserver-default-k8s-diff-port-559701            kube-system
	f420a3f656763       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c2e73af4d0e07       kube-scheduler-default-k8s-diff-port-559701            kube-system
	24e3b3c58fa5d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f60f36115b5bb       etcd-default-k8s-diff-port-559701                      kube-system
	1d71c5df1fe3f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   df89262b52ac6       kube-controller-manager-default-k8s-diff-port-559701   kube-system
	
	
	==> coredns [978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44987 - 15547 "HINFO IN 5229044937764672186.5573208851706192662. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021984028s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-559701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-559701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-559701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-559701
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-559701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                e075c442-07ea-4bfb-b4b4-14ea51a97fa9
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-kdh8n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-default-k8s-diff-port-559701                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-4g2sr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-559701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-559701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-q6lq4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-559701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-j92j9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9r89r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node default-k8s-diff-port-559701 event: Registered Node default-k8s-diff-port-559701 in Controller
	  Normal   NodeReady                100s                   kubelet          Node default-k8s-diff-port-559701 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-559701 event: Registered Node default-k8s-diff-port-559701 in Controller
	
	
	==> dmesg <==
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41] <==
	{"level":"warn","ts":"2025-11-20T22:25:59.227931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.250653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.276206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.291218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.316242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.329614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.353509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.374655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.401582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.419573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.471033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.487636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.504739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.526916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.543968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.559378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.575228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.593071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.609756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.628858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.643903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.678174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.698386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.708528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.775905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54484","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:26:58 up  5:09,  0 user,  load average: 4.17, 3.51, 2.76
	Linux default-k8s-diff-port-559701 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88] <==
	I1120 22:26:01.820660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:26:01.903467       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:26:01.903696       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:26:01.903740       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:26:01.903781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:26:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:26:02.112671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:26:02.117790       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:26:02.117821       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:26:02.117960       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:26:32.107949       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:26:32.119496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:26:32.120853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:26:32.142293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 22:26:33.818748       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:26:33.818784       1 metrics.go:72] Registering metrics
	I1120 22:26:33.818872       1 controller.go:711] "Syncing nftables rules"
	I1120 22:26:42.106705       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:26:42.106795       1 main.go:301] handling current node
	I1120 22:26:52.107297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:26:52.107344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c] <==
	I1120 22:26:00.837786       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:26:00.837794       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:26:00.837801       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:26:00.837807       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:26:00.877821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:26:00.877852       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:26:00.883139       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:26:00.891803       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 22:26:00.892234       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 22:26:00.892478       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:26:00.897392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:26:00.902890       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:26:01.084188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:26:01.170687       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 22:26:01.201544       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:26:01.604649       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:26:02.199745       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:26:02.393638       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:26:02.479767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:26:02.506797       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:26:02.619334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.162.93"}
	I1120 22:26:02.638347       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.72.253"}
	I1120 22:26:04.160733       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:26:04.577676       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:26:04.673660       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb] <==
	I1120 22:26:04.120346       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-559701"
	I1120 22:26:04.120395       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:26:04.121081       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 22:26:04.131172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:26:04.136949       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:26:04.136993       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:26:04.143060       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:26:04.146744       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:26:04.154119       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 22:26:04.157911       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 22:26:04.158197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:04.172051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:26:04.172118       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 22:26:04.172601       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:26:04.185232       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:26:04.198535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:04.198637       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:26:04.198726       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 22:26:04.200238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:26:04.203352       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:26:04.210581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 22:26:04.220036       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:26:04.220066       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:26:04.220072       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:26:04.233589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba] <==
	I1120 22:26:01.509252       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:26:01.822461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:26:01.926415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:26:01.943322       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:26:01.986419       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:26:02.165929       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:26:02.165982       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:26:02.171156       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:26:02.181265       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:26:02.182875       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:02.184437       1 config.go:200] "Starting service config controller"
	I1120 22:26:02.184449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:26:02.184469       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:26:02.184473       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:26:02.184484       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:26:02.184488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:26:02.185128       1 config.go:309] "Starting node config controller"
	I1120 22:26:02.185136       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:26:02.185143       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:26:02.385487       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:26:02.385525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:26:02.385579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367] <==
	I1120 22:25:58.459045       1 serving.go:386] Generated self-signed cert in-memory
	W1120 22:26:00.762301       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:26:00.762348       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:26:00.762358       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:26:00.762365       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:26:00.932300       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:26:00.932332       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:00.970034       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:26:00.970177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:00.970200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:00.970218       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:26:01.070722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:26:04 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:04.500955     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a5c5747-a052-47dd-8fb2-01d08cd64913-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9r89r\" (UID: \"8a5c5747-a052-47dd-8fb2-01d08cd64913\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9r89r"
	Nov 20 22:26:05 default-k8s-diff-port-559701 kubelet[781]: W1120 22:26:05.012990     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050 WatchSource:0}: Error finding container fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050: Status 404 returned error can't find the container with id fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050
	Nov 20 22:26:05 default-k8s-diff-port-559701 kubelet[781]: W1120 22:26:05.036305     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be WatchSource:0}: Error finding container d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be: Status 404 returned error can't find the container with id d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be
	Nov 20 22:26:10 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:10.295669     781 scope.go:117] "RemoveContainer" containerID="0e87091e8652d91efe0182b6b23867ffb58d0d1a7af7b653e5c2e470e577d697"
	Nov 20 22:26:10 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:10.751583     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:11.300022     781 scope.go:117] "RemoveContainer" containerID="0e87091e8652d91efe0182b6b23867ffb58d0d1a7af7b653e5c2e470e577d697"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:11.300290     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:11.300447     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:12 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:12.303912     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:12 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:12.304123     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:14 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:14.910693     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:14 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:14.910886     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.127069     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.348985     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.349233     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:29.349408     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.385483     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9r89r" podStartSLOduration=16.677151139 podStartE2EDuration="25.385465317s" podCreationTimestamp="2025-11-20 22:26:04 +0000 UTC" firstStartedPulling="2025-11-20 22:26:05.043396491 +0000 UTC m=+11.288865134" lastFinishedPulling="2025-11-20 22:26:13.751710669 +0000 UTC m=+19.997179312" observedRunningTime="2025-11-20 22:26:14.331230844 +0000 UTC m=+20.576699503" watchObservedRunningTime="2025-11-20 22:26:29.385465317 +0000 UTC m=+35.630933960"
	Nov 20 22:26:32 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:32.359784     781 scope.go:117] "RemoveContainer" containerID="71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	Nov 20 22:26:34 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:34.910188     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:34 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:34.910374     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:47 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:47.126357     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:47 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:47.126725     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:54 default-k8s-diff-port-559701 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:26:55 default-k8s-diff-port-559701 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:26:55 default-k8s-diff-port-559701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb] <==
	2025/11/20 22:26:13 Starting overwatch
	2025/11/20 22:26:13 Using namespace: kubernetes-dashboard
	2025/11/20 22:26:13 Using in-cluster config to connect to apiserver
	2025/11/20 22:26:13 Using secret token for csrf signing
	2025/11/20 22:26:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:26:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:26:13 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:26:13 Generating JWE encryption key
	2025/11/20 22:26:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:26:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:26:14 Initializing JWE encryption key from synchronized object
	2025/11/20 22:26:14 Creating in-cluster Sidecar client
	2025/11/20 22:26:14 Serving insecurely on HTTP port: 9090
	2025/11/20 22:26:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:26:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56] <==
	I1120 22:26:01.861592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:26:31.875216       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c] <==
	I1120 22:26:32.464781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:26:32.489184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:26:32.495999       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:26:32.501656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:35.975548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:40.236229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:43.838355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:46.891393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.913821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.921300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:26:49.921468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:26:49.921551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a307c5-854a-4ffe-8ac7-a9f82ffd8d45", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d became leader
	I1120 22:26:49.921627       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d!
	W1120 22:26:49.926577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.930326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:26:50.021819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d!
	W1120 22:26:51.934567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:51.942109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:53.945893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:53.953556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:55.956642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:55.961051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:57.964373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:57.971867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701: exit status 2 (378.39199ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-559701
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-559701:

-- stdout --
	[
	    {
	        "Id": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	        "Created": "2025-11-20T22:23:58.497614948Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1031845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:25:46.428971384Z",
	            "FinishedAt": "2025-11-20T22:25:45.579925223Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hostname",
	        "HostsPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/hosts",
	        "LogPath": "/var/lib/docker/containers/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225-json.log",
	        "Name": "/default-k8s-diff-port-559701",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-559701:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-559701",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225",
	                "LowerDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2aad2dab78149bd367f1bdbf8adc2a455caf53e77a4f0d918198dcb6133d1cd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-559701",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-559701/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-559701",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-559701",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "665355898c11ac8f708d14bf7a2c51ea90e6420bf85e66ceab32f8ef9822d902",
	            "SandboxKey": "/var/run/docker/netns/665355898c11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-559701": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:f4:05:b4:50:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f87df3640a96e74282a6fa8d1f119c94634bd199cb6db600d19a35606adfa81c",
	                    "EndpointID": "79fc9539923ae76d6f8b6a0f42b6216206a984cb39ae8e4751cfb47183aea6cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-559701",
	                        "dec634595af0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
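The inspect output above shows the container's requested PortBindings with empty HostPort values, while NetworkSettings.Ports carries the ephemeral host ports Docker actually assigned (34177-34181 on 127.0.0.1). As a minimal sketch, not part of the test suite, this is how such a mapping can be read back with a Go template, mirroring the `docker container inspect -f` calls that appear later in this log for 22/tcp; the container name is taken from this report, everything else is the standard docker CLI invoked from Go:

    // portprobe.go - illustrative sketch: read the published host port for the
    // apiserver port (8444/tcp in the inspect output above) using the same
    // Go-template style query minikube's own log lines below use for 22/tcp.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Equivalent CLI:
    	//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-559701
    	format := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format,
    		"default-k8s-diff-port-559701").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }

Against the container state captured above this would print 127.0.0.1:34180, matching the 8444/tcp entry under NetworkSettings.Ports.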
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701: exit status 2 (411.180334ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
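The `--format={{.Host}}` flag is a Go template evaluated against minikube's status structure, which is why the command can print `Running` yet still exit non-zero when some other component is not in the expected state (the harness notes the exit status 2 "may be ok"). Below is a minimal, self-contained sketch of that template mechanism using only the Go standard library; the `Status` struct and its field values are stand-ins for illustration, not minikube's actual type:

    // statusfmt.go - illustrative only: renders one field from a status struct
    // with a --format={{.Host}}-style Go template. Field names other than Host
    // are assumptions for the example.
    package main

    import (
    	"os"
    	"text/template"
    )

    type Status struct {
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused", Kubeconfig: "Configured"}
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	_ = tmpl.Execute(os.Stdout, st) // prints "Running", like the stdout block above
    }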
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-559701 logs -n 25: (1.606360195s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-961311 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ delete  │ -p cert-options-961311                                                                                                                                                                                                                        │ cert-options-961311          │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:21 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:21 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-443192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │                     │
	│ stop    │ -p old-k8s-version-443192 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:22 UTC │
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:26:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:26:18.408688 1034660 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:26:18.409041 1034660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:18.409082 1034660 out.go:374] Setting ErrFile to fd 2...
	I1120 22:26:18.409128 1034660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:26:18.409586 1034660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:26:18.410171 1034660 out.go:368] Setting JSON to false
	I1120 22:26:18.411537 1034660 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18504,"bootTime":1763659075,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:26:18.411667 1034660 start.go:143] virtualization:  
	I1120 22:26:18.415065 1034660 out.go:179] * [embed-certs-270206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:26:18.419117 1034660 notify.go:221] Checking for updates...
	I1120 22:26:18.419734 1034660 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:26:18.423386 1034660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:26:18.426519 1034660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:18.429649 1034660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:26:18.433069 1034660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:26:18.436148 1034660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:26:18.439729 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:18.440300 1034660 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:26:18.464251 1034660 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:26:18.464484 1034660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:26:18.533255 1034660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:26:18.523527362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:26:18.533370 1034660 docker.go:319] overlay module found
	I1120 22:26:18.536517 1034660 out.go:179] * Using the docker driver based on existing profile
	I1120 22:26:18.539512 1034660 start.go:309] selected driver: docker
	I1120 22:26:18.539583 1034660 start.go:930] validating driver "docker" against &{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:18.539691 1034660 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:26:18.540505 1034660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:26:18.596503 1034660 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:26:18.587072606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:26:18.596843 1034660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:26:18.596879 1034660 cni.go:84] Creating CNI manager for ""
	I1120 22:26:18.596936 1034660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:26:18.596977 1034660 start.go:353] cluster config:
	{Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:18.600130 1034660 out.go:179] * Starting "embed-certs-270206" primary control-plane node in "embed-certs-270206" cluster
	I1120 22:26:18.603059 1034660 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:26:18.606139 1034660 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:26:18.609042 1034660 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:26:18.609091 1034660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:26:18.609102 1034660 cache.go:65] Caching tarball of preloaded images
	I1120 22:26:18.609461 1034660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:26:18.609685 1034660 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:26:18.609697 1034660 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:26:18.609825 1034660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:26:18.633146 1034660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:26:18.633169 1034660 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:26:18.633188 1034660 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:26:18.633212 1034660 start.go:360] acquireMachinesLock for embed-certs-270206: {Name:mk80d30c009178e97eae54d0fb9c0edcaf285b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:26:18.633279 1034660 start.go:364] duration metric: took 46.441µs to acquireMachinesLock for "embed-certs-270206"
	I1120 22:26:18.633307 1034660 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:26:18.633317 1034660 fix.go:54] fixHost starting: 
	I1120 22:26:18.633565 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:18.650560 1034660 fix.go:112] recreateIfNeeded on embed-certs-270206: state=Stopped err=<nil>
	W1120 22:26:18.650594 1034660 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:26:16.501734 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:18.503808 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:21.001158 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:18.653721 1034660 out.go:252] * Restarting existing docker container for "embed-certs-270206" ...
	I1120 22:26:18.653821 1034660 cli_runner.go:164] Run: docker start embed-certs-270206
	I1120 22:26:18.931397 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:18.953990 1034660 kic.go:430] container "embed-certs-270206" state is running.
	I1120 22:26:18.954580 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:18.976516 1034660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/config.json ...
	I1120 22:26:18.976745 1034660 machine.go:94] provisionDockerMachine start ...
	I1120 22:26:18.976811 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:18.998881 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:18.999295 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:18.999313 1034660 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:26:19.000381 1034660 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:26:22.146672 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:26:22.146697 1034660 ubuntu.go:182] provisioning hostname "embed-certs-270206"
	I1120 22:26:22.146764 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.164994 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.165347 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.165369 1034660 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-270206 && echo "embed-certs-270206" | sudo tee /etc/hostname
	I1120 22:26:22.330820 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-270206
	
	I1120 22:26:22.331006 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.351089 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.351428 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.351451 1034660 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-270206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-270206/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-270206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:26:22.495469 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:26:22.495500 1034660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:26:22.495534 1034660 ubuntu.go:190] setting up certificates
	I1120 22:26:22.495544 1034660 provision.go:84] configureAuth start
	I1120 22:26:22.495621 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:22.514786 1034660 provision.go:143] copyHostCerts
	I1120 22:26:22.514862 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:26:22.514881 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:26:22.514956 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:26:22.515099 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:26:22.515112 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:26:22.515141 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:26:22.515197 1034660 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:26:22.515206 1034660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:26:22.515231 1034660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:26:22.515289 1034660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.embed-certs-270206 san=[127.0.0.1 192.168.76.2 embed-certs-270206 localhost minikube]
	I1120 22:26:22.719743 1034660 provision.go:177] copyRemoteCerts
	I1120 22:26:22.719813 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:26:22.719862 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.738478 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:22.838803 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 22:26:22.857736 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:26:22.876790 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:26:22.895199 1034660 provision.go:87] duration metric: took 399.625611ms to configureAuth
	I1120 22:26:22.895227 1034660 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:26:22.895472 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:22.895584 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:22.916967 1034660 main.go:143] libmachine: Using SSH client type: native
	I1120 22:26:22.917291 1034660 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1120 22:26:22.917309 1034660 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:26:23.270665 1034660 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:26:23.270693 1034660 machine.go:97] duration metric: took 4.293934879s to provisionDockerMachine
	I1120 22:26:23.270704 1034660 start.go:293] postStartSetup for "embed-certs-270206" (driver="docker")
	I1120 22:26:23.270715 1034660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:26:23.270777 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:26:23.270822 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.290153 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.391221 1034660 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:26:23.394821 1034660 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:26:23.394848 1034660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:26:23.394858 1034660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:26:23.394911 1034660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:26:23.395015 1034660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:26:23.395119 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:26:23.402729 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:26:23.421801 1034660 start.go:296] duration metric: took 151.081098ms for postStartSetup
	I1120 22:26:23.421905 1034660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:26:23.421967 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.441204 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.540040 1034660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:26:23.544943 1034660 fix.go:56] duration metric: took 4.911618702s for fixHost
	I1120 22:26:23.544969 1034660 start.go:83] releasing machines lock for "embed-certs-270206", held for 4.911673382s
	I1120 22:26:23.545039 1034660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-270206
	I1120 22:26:23.561683 1034660 ssh_runner.go:195] Run: cat /version.json
	I1120 22:26:23.561748 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.562007 1034660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:26:23.562070 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:23.586831 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.603067 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:23.690675 1034660 ssh_runner.go:195] Run: systemctl --version
	I1120 22:26:23.796657 1034660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:26:23.842109 1034660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:26:23.847460 1034660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:26:23.847545 1034660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:26:23.855671 1034660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:26:23.855699 1034660 start.go:496] detecting cgroup driver to use...
	I1120 22:26:23.855731 1034660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:26:23.855793 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:26:23.872394 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:26:23.886153 1034660 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:26:23.886217 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:26:23.904928 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:26:23.918902 1034660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:26:24.037707 1034660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:26:24.167873 1034660 docker.go:234] disabling docker service ...
	I1120 22:26:24.167969 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:26:24.183405 1034660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:26:24.196730 1034660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:26:24.326592 1034660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:26:24.449315 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:26:24.465334 1034660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:26:24.480321 1034660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:26:24.480442 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.489639 1034660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:26:24.489730 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.504294 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.513680 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.522754 1034660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:26:24.531375 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.541011 1034660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.550118 1034660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:26:24.558919 1034660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:26:24.567055 1034660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:26:24.574540 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:24.693089 1034660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:26:24.878346 1034660 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:26:24.878449 1034660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:26:24.882367 1034660 start.go:564] Will wait 60s for crictl version
	I1120 22:26:24.882458 1034660 ssh_runner.go:195] Run: which crictl
	I1120 22:26:24.886343 1034660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:26:24.919047 1034660 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:26:24.919135 1034660 ssh_runner.go:195] Run: crio --version
	I1120 22:26:24.951928 1034660 ssh_runner.go:195] Run: crio --version
	I1120 22:26:24.987561 1034660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1120 22:26:23.001361 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:25.011753 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:24.990401 1034660 cli_runner.go:164] Run: docker network inspect embed-certs-270206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:26:25.014554 1034660 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:26:25.019094 1034660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:26:25.031754 1034660 kubeadm.go:884] updating cluster {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:26:25.031885 1034660 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:26:25.031941 1034660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:26:25.071882 1034660 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:26:25.071909 1034660 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:26:25.071974 1034660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:26:25.100054 1034660 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:26:25.100080 1034660 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:26:25.100088 1034660 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:26:25.100204 1034660 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-270206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:26:25.100314 1034660 ssh_runner.go:195] Run: crio config
	I1120 22:26:25.157253 1034660 cni.go:84] Creating CNI manager for ""
	I1120 22:26:25.157279 1034660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:26:25.157323 1034660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:26:25.157352 1034660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-270206 NodeName:embed-certs-270206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:26:25.157494 1034660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-270206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
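The YAML above is the configuration minikube renders and ships to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal way to pull it back out and sanity-check it by hand, assuming a reasonably recent kubeadm on the host (illustrative only, not part of the captured run):

	$ minikube -p embed-certs-270206 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new > kubeadm.yaml
	$ kubeadm config validate --config kubeadm.yaml    # the validate subcommand needs a recent kubeadm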
	
	I1120 22:26:25.157568 1034660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:26:25.165992 1034660 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:26:25.166105 1034660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:26:25.174403 1034660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 22:26:25.187462 1034660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:26:25.200386 1034660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1120 22:26:25.213473 1034660 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:26:25.217650 1034660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:26:25.227773 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:25.356302 1034660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:26:25.375112 1034660 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206 for IP: 192.168.76.2
	I1120 22:26:25.375139 1034660 certs.go:195] generating shared ca certs ...
	I1120 22:26:25.375156 1034660 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:25.375310 1034660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:26:25.375365 1034660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:26:25.375376 1034660 certs.go:257] generating profile certs ...
	I1120 22:26:25.375482 1034660 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/client.key
	I1120 22:26:25.375556 1034660 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key.ed27b386
	I1120 22:26:25.375607 1034660 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key
	I1120 22:26:25.375723 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:26:25.375759 1034660 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:26:25.375772 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:26:25.375808 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:26:25.375835 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:26:25.375862 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:26:25.375906 1034660 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:26:25.377008 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:26:25.406215 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:26:25.430507 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:26:25.454177 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:26:25.479535 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 22:26:25.511320 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:26:25.534139 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:26:25.556171 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/embed-certs-270206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 22:26:25.590809 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:26:25.619083 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:26:25.639317 1034660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:26:25.664563 1034660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:26:25.679328 1034660 ssh_runner.go:195] Run: openssl version
	I1120 22:26:25.686212 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.694121 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:26:25.702221 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.706144 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.706234 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:26:25.749072 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:26:25.756769 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.764392 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:26:25.772363 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.776246 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.776309 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:26:25.817560 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:26:25.826570 1034660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.834237 1034660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:26:25.841942 1034660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.845579 1034660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.845665 1034660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:26:25.886928 1034660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
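The 51391683.0, b5213941.0 and 3ec20f2e.0 names probed above are OpenSSL subject-hash symlinks under /etc/ssl/certs; the hash that forms each symlink name can be reproduced by hand from the PEM file (illustrative):

	$ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	$ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"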
	I1120 22:26:25.894502 1034660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:26:25.901444 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:26:25.946530 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:26:25.989968 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:26:26.038267 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:26:26.100343 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:26:26.169854 1034660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
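Each -checkend 86400 probe above exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit is what would push the restart path into regenerating the affected control-plane certs. Stand-alone form of the same check, for reference:

	$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"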
	I1120 22:26:26.257232 1034660 kubeadm.go:401] StartCluster: {Name:embed-certs-270206 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-270206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:26:26.257327 1034660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:26:26.257396 1034660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:26:26.301229 1034660 cri.go:89] found id: "3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824"
	I1120 22:26:26.301252 1034660 cri.go:89] found id: "0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913"
	I1120 22:26:26.301258 1034660 cri.go:89] found id: "ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712"
	I1120 22:26:26.301262 1034660 cri.go:89] found id: "a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677"
	I1120 22:26:26.301273 1034660 cri.go:89] found id: ""
	I1120 22:26:26.301322 1034660 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:26:26.315085 1034660 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:26:26Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:26:26.315161 1034660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:26:26.326609 1034660 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:26:26.326637 1034660 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:26:26.326690 1034660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:26:26.341215 1034660 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:26:26.341784 1034660 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-270206" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:26.342071 1034660 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-270206" cluster setting kubeconfig missing "embed-certs-270206" context setting]
	I1120 22:26:26.342520 1034660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:26.344233 1034660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:26:26.359479 1034660 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1120 22:26:26.359518 1034660 kubeadm.go:602] duration metric: took 32.872948ms to restartPrimaryControlPlane
	I1120 22:26:26.359527 1034660 kubeadm.go:403] duration metric: took 102.307596ms to StartCluster
	I1120 22:26:26.359543 1034660 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:26.359635 1034660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:26:26.360899 1034660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:26:26.361350 1034660 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:26:26.361415 1034660 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:26:26.361471 1034660 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:26:26.361542 1034660 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-270206"
	I1120 22:26:26.361561 1034660 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-270206"
	W1120 22:26:26.361581 1034660 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:26:26.361605 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.362067 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.362555 1034660 addons.go:70] Setting default-storageclass=true in profile "embed-certs-270206"
	I1120 22:26:26.362585 1034660 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-270206"
	I1120 22:26:26.362653 1034660 addons.go:70] Setting dashboard=true in profile "embed-certs-270206"
	I1120 22:26:26.362687 1034660 addons.go:239] Setting addon dashboard=true in "embed-certs-270206"
	W1120 22:26:26.362706 1034660 addons.go:248] addon dashboard should already be in state true
	I1120 22:26:26.362758 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.362876 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.363359 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.376868 1034660 out.go:179] * Verifying Kubernetes components...
	I1120 22:26:26.385839 1034660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:26:26.406633 1034660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:26:26.409576 1034660 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:26:26.409599 1034660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:26:26.409665 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.412456 1034660 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:26:26.415476 1034660 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:26:26.418889 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:26:26.418923 1034660 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:26:26.419086 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.425605 1034660 addons.go:239] Setting addon default-storageclass=true in "embed-certs-270206"
	W1120 22:26:26.425630 1034660 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:26:26.425653 1034660 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:26:26.426077 1034660 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:26:26.471105 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.472840 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.483255 1034660 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:26:26.483288 1034660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:26:26.483357 1034660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:26:26.515215 1034660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:26:26.711841 1034660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:26:26.727990 1034660 node_ready.go:35] waiting up to 6m0s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:26:26.777156 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:26:26.777232 1034660 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:26:26.793017 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:26:26.819032 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:26:26.856415 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:26:26.856442 1034660 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:26:26.889571 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:26:26.889598 1034660 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:26:26.976490 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:26:26.976516 1034660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:26:27.081650 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:26:27.081678 1034660 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:26:27.106868 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:26:27.106894 1034660 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:26:27.137962 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:26:27.137988 1034660 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:26:27.158997 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:26:27.159020 1034660 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:26:27.183739 1034660 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:26:27.183769 1034660 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:26:27.214417 1034660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
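All of the dashboard manifests above go in through a single kubectl apply against the in-node kubeconfig; the user-facing equivalent of this addon install is simply (illustrative, not what the harness invokes here):

	$ minikube -p embed-certs-270206 addons enable dashboard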
	W1120 22:26:27.501892 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:30.000943 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:31.115458 1034660 node_ready.go:49] node "embed-certs-270206" is "Ready"
	I1120 22:26:31.115491 1034660 node_ready.go:38] duration metric: took 4.387421846s for node "embed-certs-270206" to be "Ready" ...
	I1120 22:26:31.115506 1034660 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:26:31.115572 1034660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:26:33.195427 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.4023189s)
	I1120 22:26:33.195493 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.376390789s)
	I1120 22:26:33.308282 1034660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.093818286s)
	I1120 22:26:33.308571 1034660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.192980952s)
	I1120 22:26:33.308638 1034660 api_server.go:72] duration metric: took 6.947194099s to wait for apiserver process to appear ...
	I1120 22:26:33.308664 1034660 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:26:33.308711 1034660 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:26:33.311756 1034660 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-270206 addons enable metrics-server
	
	I1120 22:26:33.314616 1034660 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1120 22:26:33.317446 1034660 addons.go:515] duration metric: took 6.955962319s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1120 22:26:33.323552 1034660 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
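The healthz poll above can be reproduced from the host; on a default apiserver /healthz is readable without credentials, so an untrusted-cert curl is normally enough (assumed environment):

	$ curl -k https://192.168.76.2:8443/healthz
	ok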
	I1120 22:26:33.325421 1034660 api_server.go:141] control plane version: v1.34.1
	I1120 22:26:33.325452 1034660 api_server.go:131] duration metric: took 16.767822ms to wait for apiserver health ...
	I1120 22:26:33.325463 1034660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:26:33.333128 1034660 system_pods.go:59] 8 kube-system pods found
	I1120 22:26:33.333164 1034660 system_pods.go:61] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:26:33.333174 1034660 system_pods.go:61] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:26:33.333181 1034660 system_pods.go:61] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:26:33.333188 1034660 system_pods.go:61] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:26:33.333200 1034660 system_pods.go:61] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:26:33.333208 1034660 system_pods.go:61] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:26:33.333215 1034660 system_pods.go:61] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:26:33.333228 1034660 system_pods.go:61] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Running
	I1120 22:26:33.333236 1034660 system_pods.go:74] duration metric: took 7.767747ms to wait for pod list to return data ...
	I1120 22:26:33.333248 1034660 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:26:33.338696 1034660 default_sa.go:45] found service account: "default"
	I1120 22:26:33.338721 1034660 default_sa.go:55] duration metric: took 5.466548ms for default service account to be created ...
	I1120 22:26:33.338730 1034660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:26:33.344550 1034660 system_pods.go:86] 8 kube-system pods found
	I1120 22:26:33.344588 1034660 system_pods.go:89] "coredns-66bc5c9577-c5cg5" [42c2a518-d0e5-4c59-9710-7b624f63c38c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:26:33.344597 1034660 system_pods.go:89] "etcd-embed-certs-270206" [5e65bc97-d5f1-43e1-98a3-e9fbf1523362] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:26:33.344603 1034660 system_pods.go:89] "kindnet-9sqjv" [1d0771a4-278b-44eb-a563-ab815df51728] Running
	I1120 22:26:33.344610 1034660 system_pods.go:89] "kube-apiserver-embed-certs-270206" [86e699be-1798-428d-a223-8682e8ddfd6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:26:33.344616 1034660 system_pods.go:89] "kube-controller-manager-embed-certs-270206" [afe1bea4-7588-46af-8287-363bad438880] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:26:33.344621 1034660 system_pods.go:89] "kube-proxy-9d84b" [372ec000-a084-43d1-ac94-5cb64204ba40] Running
	I1120 22:26:33.344630 1034660 system_pods.go:89] "kube-scheduler-embed-certs-270206" [ab91a905-69f6-42ce-98a7-b166339a6d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:26:33.344634 1034660 system_pods.go:89] "storage-provisioner" [276e2ed3-8832-46cb-baf7-6accd2f37e27] Running
	I1120 22:26:33.344641 1034660 system_pods.go:126] duration metric: took 5.905028ms to wait for k8s-apps to be running ...
	I1120 22:26:33.344652 1034660 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:26:33.344732 1034660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:26:33.384503 1034660 system_svc.go:56] duration metric: took 39.826555ms WaitForService to wait for kubelet
	I1120 22:26:33.384592 1034660 kubeadm.go:587] duration metric: took 7.023146534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:26:33.384626 1034660 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:26:33.389111 1034660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:26:33.389203 1034660 node_conditions.go:123] node cpu capacity is 2
	I1120 22:26:33.389235 1034660 node_conditions.go:105] duration metric: took 4.585593ms to run NodePressure ...
	I1120 22:26:33.389290 1034660 start.go:242] waiting for startup goroutines ...
	I1120 22:26:33.389316 1034660 start.go:247] waiting for cluster config update ...
	I1120 22:26:33.389356 1034660 start.go:256] writing updated cluster config ...
	I1120 22:26:33.389772 1034660 ssh_runner.go:195] Run: rm -f paused
	I1120 22:26:33.399635 1034660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:26:33.404580 1034660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:26:32.001892 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:34.502529 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:35.410870 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:37.911491 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:37.002076 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	W1120 22:26:39.502828 1031720 pod_ready.go:104] pod "coredns-66bc5c9577-kdh8n" is not "Ready", error: <nil>
	I1120 22:26:41.002386 1031720 pod_ready.go:94] pod "coredns-66bc5c9577-kdh8n" is "Ready"
	I1120 22:26:41.002416 1031720 pod_ready.go:86] duration metric: took 38.006803069s for pod "coredns-66bc5c9577-kdh8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.006914 1031720 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.012729 1031720 pod_ready.go:94] pod "etcd-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.012757 1031720 pod_ready.go:86] duration metric: took 5.812932ms for pod "etcd-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.016637 1031720 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.025333 1031720 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.025360 1031720 pod_ready.go:86] duration metric: took 8.695726ms for pod "kube-apiserver-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.028104 1031720 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.200526 1031720 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:41.200555 1031720 pod_ready.go:86] duration metric: took 172.424404ms for pod "kube-controller-manager-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.400506 1031720 pod_ready.go:83] waiting for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:41.799771 1031720 pod_ready.go:94] pod "kube-proxy-q6lq4" is "Ready"
	I1120 22:26:41.799799 1031720 pod_ready.go:86] duration metric: took 399.266664ms for pod "kube-proxy-q6lq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.000368 1031720 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.401419 1031720 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-559701" is "Ready"
	I1120 22:26:42.401463 1031720 pod_ready.go:86] duration metric: took 401.022173ms for pod "kube-scheduler-default-k8s-diff-port-559701" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:26:42.401477 1031720 pod_ready.go:40] duration metric: took 39.413168884s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:26:42.498179 1031720 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:26:42.502654 1031720 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-559701" cluster and "default" namespace by default
	W1120 22:26:39.911726 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:42.413357 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:44.909906 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:46.910678 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:49.411670 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:51.909913 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:53.914252 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	W1120 22:26:56.410012 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
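The repeated pod_ready warnings above are the harness polling coredns until the pod reports Ready; roughly the same wait expressed with stock kubectl (illustrative, not what the test runs):

	$ kubectl --context embed-certs-270206 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m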
	
	
	==> CRI-O <==
	Nov 20 22:26:29 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:29.37491767Z" level=info msg="Removed container 7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9/dashboard-metrics-scraper" id=3747d358-115f-438b-9be8-f2ab09a246ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:26:31 default-k8s-diff-port-559701 conmon[1174]: conmon 71ac6e6796c03c7fb8d8 <ninfo>: container 1185 exited with status 1
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.360210701Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c18b670-aee9-4ef0-9348-a3c85b7e3a5b name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.361572015Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9912e82b-e642-46ab-8de4-50b913b65e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.362582129Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=243b3a44-3e97-4841-8627-5ed4368509ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.362690011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.379598175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380006845Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/773439562dfa0ec3dd207b72bc565c572003f0362cc9d418bbb57eb9f2e52906/merged/etc/passwd: no such file or directory"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380113185Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/773439562dfa0ec3dd207b72bc565c572003f0362cc9d418bbb57eb9f2e52906/merged/etc/group: no such file or directory"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.380495483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.425067223Z" level=info msg="Created container c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c: kube-system/storage-provisioner/storage-provisioner" id=243b3a44-3e97-4841-8627-5ed4368509ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.426349086Z" level=info msg="Starting container: c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c" id=3f87d98c-8ece-4912-9f8f-1a15ad331426 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:26:32 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:32.428634949Z" level=info msg="Started container" PID=1644 containerID=c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c description=kube-system/storage-provisioner/storage-provisioner id=3f87d98c-8ece-4912-9f8f-1a15ad331426 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b70b03cc667abe2e929031055ec9ca42b04b6be80cf4faa3e5bca8bdc1b5166
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.107107526Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.11187073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.111914497Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.111935477Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.120456416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.120495374Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.12124845Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136098512Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136141147Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.136162332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.144687152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:26:42 default-k8s-diff-port-559701 crio[652]: time="2025-11-20T22:26:42.144732527Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c4a140840e884       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   0b70b03cc667a       storage-provisioner                                    kube-system
	820ec548d452c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   fa521584b2941       dashboard-metrics-scraper-6ffb444bf9-j92j9             kubernetes-dashboard
	f46a136c47f72       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   d0482f9d32563       kubernetes-dashboard-855c9754f9-9r89r                  kubernetes-dashboard
	978f68cdd75cb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   1c017f19cc54b       coredns-66bc5c9577-kdh8n                               kube-system
	71ac6e6796c03       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   0b70b03cc667a       storage-provisioner                                    kube-system
	0f79920804108       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   c75bd385eabab       kindnet-4g2sr                                          kube-system
	60afe48cceae7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   ac2dffc111b35       busybox                                                default
	5fd128cd31c50       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   53db3265a91e1       kube-proxy-q6lq4                                       kube-system
	5a6629b69c5e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   22d99a23ab2dc       kube-apiserver-default-k8s-diff-port-559701            kube-system
	f420a3f656763       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c2e73af4d0e07       kube-scheduler-default-k8s-diff-port-559701            kube-system
	24e3b3c58fa5d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f60f36115b5bb       etcd-default-k8s-diff-port-559701                      kube-system
	1d71c5df1fe3f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   df89262b52ac6       kube-controller-manager-default-k8s-diff-port-559701   kube-system
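The container status table above appears to be gathered from the CRI on the node; it can be regenerated after the fact with (assumed commands):

	$ minikube -p default-k8s-diff-port-559701 ssh -- sudo crictl ps -a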
	
	
	==> coredns [978f68cdd75cb6ba1a4707d81fabaa6706e4b0e8b6fcaace8452d6080183c3ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44987 - 15547 "HINFO IN 5229044937764672186.5573208851706192662. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021984028s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
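The i/o timeouts to 10.96.0.1:443 mean coredns briefly could not reach the in-cluster apiserver Service after the restart. Quick follow-up checks from the host once the control plane answers (hypothetical commands, not part of the captured run):

	$ kubectl --context default-k8s-diff-port-559701 get endpoints kubernetes
	$ kubectl --context default-k8s-diff-port-559701 -n kube-system logs -l k8s-app=kube-dns --tail=20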
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-559701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-559701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-559701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-559701
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:24:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:26:51 +0000   Thu, 20 Nov 2025 22:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-559701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                e075c442-07ea-4bfb-b4b4-14ea51a97fa9
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-kdh8n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-559701                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-4g2sr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-559701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-559701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-q6lq4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-559701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-j92j9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9r89r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-559701 event: Registered Node default-k8s-diff-port-559701 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-559701 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-559701 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node default-k8s-diff-port-559701 event: Registered Node default-k8s-diff-port-559701 in Controller
	
	
	==> dmesg <==
	[Nov20 22:02] overlayfs: idmapped layers are currently not supported
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [24e3b3c58fa5dc48ddc4f9d5406e8ee808c9a30a31a0509d6f7eacbc5ebb4a41] <==
	{"level":"warn","ts":"2025-11-20T22:25:59.227931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.250653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.276206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.291218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.316242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.329614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.353509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.374655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.401582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.419573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.471033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.487636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.504739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.526916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.543968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.559378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.575228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.593071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.609756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.628858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.643903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.678174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.698386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.708528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:25:59.775905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54484","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:27:00 up  5:09,  0 user,  load average: 3.91, 3.47, 2.75
	Linux default-k8s-diff-port-559701 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f799208041082e605140f3d4caab1ef18ec66f7efd50760890b4593e204bb88] <==
	I1120 22:26:01.820660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:26:01.903467       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:26:01.903696       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:26:01.903740       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:26:01.903781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:26:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:26:02.112671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:26:02.117790       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:26:02.117821       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:26:02.117960       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:26:32.107949       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 22:26:32.119496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:26:32.120853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:26:32.142293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 22:26:33.818748       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:26:33.818784       1 metrics.go:72] Registering metrics
	I1120 22:26:33.818872       1 controller.go:711] "Syncing nftables rules"
	I1120 22:26:42.106705       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:26:42.106795       1 main.go:301] handling current node
	I1120 22:26:52.107297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:26:52.107344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a6629b69c5e0d8e000cdd414ba97d90c5b7a7e59914d41eb655c3968aad1a0c] <==
	I1120 22:26:00.837786       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:26:00.837794       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:26:00.837801       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:26:00.837807       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:26:00.877821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:26:00.877852       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:26:00.883139       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:26:00.891803       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 22:26:00.892234       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 22:26:00.892478       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:26:00.897392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:26:00.902890       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:26:01.084188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:26:01.170687       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 22:26:01.201544       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:26:01.604649       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:26:02.199745       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:26:02.393638       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:26:02.479767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:26:02.506797       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:26:02.619334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.162.93"}
	I1120 22:26:02.638347       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.72.253"}
	I1120 22:26:04.160733       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:26:04.577676       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:26:04.673660       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1d71c5df1fe3fb7bc49ab400af58339d6f0dbb2f7f20480e8fca0999b681c9bb] <==
	I1120 22:26:04.120346       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-559701"
	I1120 22:26:04.120395       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:26:04.121081       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 22:26:04.131172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:26:04.136949       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:26:04.136993       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:26:04.143060       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:26:04.146744       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:26:04.154119       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 22:26:04.157911       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 22:26:04.158197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:04.172051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:26:04.172118       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 22:26:04.172601       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:26:04.185232       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:26:04.198535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:04.198637       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:26:04.198726       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 22:26:04.200238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:26:04.203352       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:26:04.210581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 22:26:04.220036       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:26:04.220066       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:26:04.220072       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:26:04.233589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5fd128cd31c50bca5a1687270aadf6c6a1bf19093abae39c49f64e02a3647fba] <==
	I1120 22:26:01.509252       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:26:01.822461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:26:01.926415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:26:01.943322       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:26:01.986419       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:26:02.165929       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:26:02.165982       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:26:02.171156       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:26:02.181265       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:26:02.182875       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:02.184437       1 config.go:200] "Starting service config controller"
	I1120 22:26:02.184449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:26:02.184469       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:26:02.184473       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:26:02.184484       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:26:02.184488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:26:02.185128       1 config.go:309] "Starting node config controller"
	I1120 22:26:02.185136       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:26:02.185143       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:26:02.385487       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:26:02.385525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:26:02.385579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f420a3f656763afb77ad4591b661d794b5ba1e728742d94c9f2a35b5d946b367] <==
	I1120 22:25:58.459045       1 serving.go:386] Generated self-signed cert in-memory
	W1120 22:26:00.762301       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 22:26:00.762348       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 22:26:00.762358       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 22:26:00.762365       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 22:26:00.932300       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:26:00.932332       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:00.970034       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:26:00.970177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:00.970200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:00.970218       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:26:01.070722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:26:04 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:04.500955     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a5c5747-a052-47dd-8fb2-01d08cd64913-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9r89r\" (UID: \"8a5c5747-a052-47dd-8fb2-01d08cd64913\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9r89r"
	Nov 20 22:26:05 default-k8s-diff-port-559701 kubelet[781]: W1120 22:26:05.012990     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050 WatchSource:0}: Error finding container fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050: Status 404 returned error can't find the container with id fa521584b2941e0b08e56df4760e1ad83665cf79545c792c2d8e95d4485d6050
	Nov 20 22:26:05 default-k8s-diff-port-559701 kubelet[781]: W1120 22:26:05.036305     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dec634595af070be273337d842f7d675b54e4be9634f4a2c3557821bda49a225/crio-d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be WatchSource:0}: Error finding container d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be: Status 404 returned error can't find the container with id d0482f9d325633287d65e1373292d7211ea2ab0e1d9e7153a81a4abe7a5939be
	Nov 20 22:26:10 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:10.295669     781 scope.go:117] "RemoveContainer" containerID="0e87091e8652d91efe0182b6b23867ffb58d0d1a7af7b653e5c2e470e577d697"
	Nov 20 22:26:10 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:10.751583     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:11.300022     781 scope.go:117] "RemoveContainer" containerID="0e87091e8652d91efe0182b6b23867ffb58d0d1a7af7b653e5c2e470e577d697"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:11.300290     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:11 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:11.300447     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:12 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:12.303912     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:12 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:12.304123     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:14 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:14.910693     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:14 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:14.910886     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.127069     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.348985     781 scope.go:117] "RemoveContainer" containerID="7f8a81b7ae14ccae260be72f8df00f55e73e02546a7b80021502ac334d9dcbc7"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.349233     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:29.349408     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:29 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:29.385483     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9r89r" podStartSLOduration=16.677151139 podStartE2EDuration="25.385465317s" podCreationTimestamp="2025-11-20 22:26:04 +0000 UTC" firstStartedPulling="2025-11-20 22:26:05.043396491 +0000 UTC m=+11.288865134" lastFinishedPulling="2025-11-20 22:26:13.751710669 +0000 UTC m=+19.997179312" observedRunningTime="2025-11-20 22:26:14.331230844 +0000 UTC m=+20.576699503" watchObservedRunningTime="2025-11-20 22:26:29.385465317 +0000 UTC m=+35.630933960"
	Nov 20 22:26:32 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:32.359784     781 scope.go:117] "RemoveContainer" containerID="71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56"
	Nov 20 22:26:34 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:34.910188     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:34 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:34.910374     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:47 default-k8s-diff-port-559701 kubelet[781]: I1120 22:26:47.126357     781 scope.go:117] "RemoveContainer" containerID="820ec548d452c0a792ac16a89bac20c757c3a06cb1caf91ec56781cfd73dc6ad"
	Nov 20 22:26:47 default-k8s-diff-port-559701 kubelet[781]: E1120 22:26:47.126725     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j92j9_kubernetes-dashboard(d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j92j9" podUID="d6bec3ee-82d0-4f43-aa02-e1d3dbd5e326"
	Nov 20 22:26:54 default-k8s-diff-port-559701 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:26:55 default-k8s-diff-port-559701 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:26:55 default-k8s-diff-port-559701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f46a136c47f729995c7015f57754a197f8024a568665f2ed05d801a225a32dcb] <==
	2025/11/20 22:26:13 Starting overwatch
	2025/11/20 22:26:13 Using namespace: kubernetes-dashboard
	2025/11/20 22:26:13 Using in-cluster config to connect to apiserver
	2025/11/20 22:26:13 Using secret token for csrf signing
	2025/11/20 22:26:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:26:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:26:13 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:26:13 Generating JWE encryption key
	2025/11/20 22:26:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:26:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:26:14 Initializing JWE encryption key from synchronized object
	2025/11/20 22:26:14 Creating in-cluster Sidecar client
	2025/11/20 22:26:14 Serving insecurely on HTTP port: 9090
	2025/11/20 22:26:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:26:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [71ac6e6796c03c7fb8d831ed11b785c9b2c4a26e730aadb906054e37e9d71d56] <==
	I1120 22:26:01.861592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:26:31.875216       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c4a140840e88451bcb7186b191e974b1f47a8940a55b1dcff5335b67d20cf73c] <==
	I1120 22:26:32.489184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:26:32.495999       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:26:32.501656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:35.975548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:40.236229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:43.838355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:46.891393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.913821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.921300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:26:49.921468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:26:49.921551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a307c5-854a-4ffe-8ac7-a9f82ffd8d45", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d became leader
	I1120 22:26:49.921627       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d!
	W1120 22:26:49.926577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:49.930326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:26:50.021819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-559701_e94f15ff-fbc7-4f06-9a7b-3e31cb9dbf3d!
	W1120 22:26:51.934567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:51.942109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:53.945893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:53.953556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:55.956642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:55.961051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:57.964373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:57.971867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:59.974641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:26:59.981559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701: exit status 2 (373.366769ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.33s)

x
+
TestStartStop/group/embed-certs/serial/Pause (8.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-270206 --alsologtostderr -v=1
E1120 22:27:21.512948  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.674543  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.996125  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:22.638123  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-270206 --alsologtostderr -v=1: exit status 80 (2.164164934s)

-- stdout --
	* Pausing node embed-certs-270206 ... 
	
	

-- /stdout --
** stderr ** 
	I1120 22:27:21.563490 1040571 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:27:21.563670 1040571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:21.563691 1040571 out.go:374] Setting ErrFile to fd 2...
	I1120 22:27:21.563697 1040571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:21.563994 1040571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:27:21.564291 1040571 out.go:368] Setting JSON to false
	I1120 22:27:21.564322 1040571 mustload.go:66] Loading cluster: embed-certs-270206
	I1120 22:27:21.564768 1040571 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:21.565303 1040571 cli_runner.go:164] Run: docker container inspect embed-certs-270206 --format={{.State.Status}}
	I1120 22:27:21.588844 1040571 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:27:21.589277 1040571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:21.687221 1040571 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2025-11-20 22:27:21.677768398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:21.687861 1040571 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-270206 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:27:21.692909 1040571 out.go:179] * Pausing node embed-certs-270206 ... 
	I1120 22:27:21.697636 1040571 host.go:66] Checking if "embed-certs-270206" exists ...
	I1120 22:27:21.697992 1040571 ssh_runner.go:195] Run: systemctl --version
	I1120 22:27:21.698032 1040571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-270206
	I1120 22:27:21.719923 1040571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/embed-certs-270206/id_rsa Username:docker}
	I1120 22:27:21.829886 1040571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:27:21.843686 1040571 pause.go:52] kubelet running: true
	I1120 22:27:21.843755 1040571 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:27:22.159639 1040571 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:27:22.159723 1040571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:27:22.243074 1040571 cri.go:89] found id: "644045e039c8edbef28f05e2081e02e88e7668ac9e011777e75d8215f8ad38fa"
	I1120 22:27:22.243098 1040571 cri.go:89] found id: "e71b73690cd58a0e2ae007ea5eee09f437d2a6e6614e83f0ae2f01702549a622"
	I1120 22:27:22.243104 1040571 cri.go:89] found id: "afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443"
	I1120 22:27:22.243108 1040571 cri.go:89] found id: "345088aa6124d02a9931e7016c0a1f09f4824adfef2e5d2fd4e64bda6a242344"
	I1120 22:27:22.243111 1040571 cri.go:89] found id: "8c6434945bfead8d9b74fa7b85cd734ff1ff9683d7020d6b958ee4c50150bcba"
	I1120 22:27:22.243116 1040571 cri.go:89] found id: "3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824"
	I1120 22:27:22.243119 1040571 cri.go:89] found id: "0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913"
	I1120 22:27:22.243122 1040571 cri.go:89] found id: "ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712"
	I1120 22:27:22.243125 1040571 cri.go:89] found id: "a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677"
	I1120 22:27:22.243133 1040571 cri.go:89] found id: "cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab"
	I1120 22:27:22.243136 1040571 cri.go:89] found id: "bf57dfc57e7549a44597fd1849b61c6c486546fa7ec4348a7ed3ac28731fa817"
	I1120 22:27:22.243140 1040571 cri.go:89] found id: ""
	I1120 22:27:22.243191 1040571 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:27:22.257694 1040571 retry.go:31] will retry after 300.721759ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:27:22Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:27:22.559253 1040571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:27:22.577788 1040571 pause.go:52] kubelet running: false
	I1120 22:27:22.577864 1040571 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:27:22.806432 1040571 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:27:22.806521 1040571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:27:22.925613 1040571 cri.go:89] found id: "644045e039c8edbef28f05e2081e02e88e7668ac9e011777e75d8215f8ad38fa"
	I1120 22:27:22.925650 1040571 cri.go:89] found id: "e71b73690cd58a0e2ae007ea5eee09f437d2a6e6614e83f0ae2f01702549a622"
	I1120 22:27:22.925656 1040571 cri.go:89] found id: "afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443"
	I1120 22:27:22.925659 1040571 cri.go:89] found id: "345088aa6124d02a9931e7016c0a1f09f4824adfef2e5d2fd4e64bda6a242344"
	I1120 22:27:22.925663 1040571 cri.go:89] found id: "8c6434945bfead8d9b74fa7b85cd734ff1ff9683d7020d6b958ee4c50150bcba"
	I1120 22:27:22.925666 1040571 cri.go:89] found id: "3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824"
	I1120 22:27:22.925669 1040571 cri.go:89] found id: "0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913"
	I1120 22:27:22.925672 1040571 cri.go:89] found id: "ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712"
	I1120 22:27:22.925675 1040571 cri.go:89] found id: "a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677"
	I1120 22:27:22.925682 1040571 cri.go:89] found id: "cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab"
	I1120 22:27:22.925686 1040571 cri.go:89] found id: "bf57dfc57e7549a44597fd1849b61c6c486546fa7ec4348a7ed3ac28731fa817"
	I1120 22:27:22.925707 1040571 cri.go:89] found id: ""
	I1120 22:27:22.925760 1040571 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:27:22.940705 1040571 retry.go:31] will retry after 294.016944ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:27:22Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:27:23.235018 1040571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:27:23.249719 1040571 pause.go:52] kubelet running: false
	I1120 22:27:23.249803 1040571 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:27:23.491140 1040571 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:27:23.491240 1040571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:27:23.586724 1040571 cri.go:89] found id: "644045e039c8edbef28f05e2081e02e88e7668ac9e011777e75d8215f8ad38fa"
	I1120 22:27:23.586751 1040571 cri.go:89] found id: "e71b73690cd58a0e2ae007ea5eee09f437d2a6e6614e83f0ae2f01702549a622"
	I1120 22:27:23.586757 1040571 cri.go:89] found id: "afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443"
	I1120 22:27:23.586761 1040571 cri.go:89] found id: "345088aa6124d02a9931e7016c0a1f09f4824adfef2e5d2fd4e64bda6a242344"
	I1120 22:27:23.586764 1040571 cri.go:89] found id: "8c6434945bfead8d9b74fa7b85cd734ff1ff9683d7020d6b958ee4c50150bcba"
	I1120 22:27:23.586769 1040571 cri.go:89] found id: "3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824"
	I1120 22:27:23.586772 1040571 cri.go:89] found id: "0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913"
	I1120 22:27:23.586783 1040571 cri.go:89] found id: "ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712"
	I1120 22:27:23.586788 1040571 cri.go:89] found id: "a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677"
	I1120 22:27:23.586794 1040571 cri.go:89] found id: "cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab"
	I1120 22:27:23.586800 1040571 cri.go:89] found id: "bf57dfc57e7549a44597fd1849b61c6c486546fa7ec4348a7ed3ac28731fa817"
	I1120 22:27:23.586803 1040571 cri.go:89] found id: ""
	I1120 22:27:23.586869 1040571 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:27:23.604766 1040571 out.go:203] 
	W1120 22:27:23.607920 1040571 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:27:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:27:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:27:23.607952 1040571 out.go:285] * 
	* 
	W1120 22:27:23.617909 1040571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:27:23.620918 1040571 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-270206 --alsologtostderr -v=1 failed: exit status 80
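The pause failure above is minikube's GUEST_PAUSE path calling `sudo runc list -f json` on the node and aborting because /run/runc does not exist, even though cri.go above still finds running container IDs. A minimal triage sketch (hypothetical follow-up commands against the still-running node container, not part of the recorded test run):

	# check which runtime state directories actually exist on the cri-o node
	docker exec embed-certs-270206 sudo ls -d /run/runc /run/crio
	# reproduce the exact call the pause path makes (expected to fail the same way)
	docker exec embed-certs-270206 sudo runc list -f json

If /run/runc is missing while the node's containers are otherwise listed as running, the failure points at the container-listing step of pause rather than at the cluster state itself.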
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-270206
helpers_test.go:243: (dbg) docker inspect embed-certs-270206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	        "Created": "2025-11-20T22:24:33.33301512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1034786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:26:18.688600985Z",
	            "FinishedAt": "2025-11-20T22:26:17.274402434Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hosts",
	        "LogPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd-json.log",
	        "Name": "/embed-certs-270206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-270206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-270206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	                "LowerDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-270206",
	                "Source": "/var/lib/docker/volumes/embed-certs-270206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-270206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-270206",
	                "name.minikube.sigs.k8s.io": "embed-certs-270206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfa5f0301d4a5415935f6b792940a8c21a62ea07e578b8e3707c6127632bd68a",
	            "SandboxKey": "/var/run/docker/netns/dfa5f0301d4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-270206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:89:03:33:43:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffd59b794c532505e054cacac90fc1087646ff0df0b0ac27f388edeea26b442",
	                    "EndpointID": "e2ec7b597ebf174fd24ee8b35cefc6ebc009614823c1de44c0a26bed30bbb405",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-270206",
	                        "155df8ef967b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
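For reference, the inspect output above publishes each container port only on 127.0.0.1 with an ephemeral host port; the concrete mappings live under NetworkSettings.Ports (22/tcp -> 34182, 8443/tcp -> 34185, and so on). A single mapping can be pulled out of that JSON with the same Go template the minikube logs use further down for SSH, e.g. (illustrative command, not part of the recorded run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-270206
	# prints 34182 for the run captured above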
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206
E1120 22:27:23.920048  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206: exit status 2 (454.478014ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25: (1.887076457s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:27:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:27:05.087368 1038356 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:27:05.087545 1038356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:05.087568 1038356 out.go:374] Setting ErrFile to fd 2...
	I1120 22:27:05.087586 1038356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:05.087966 1038356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:27:05.088450 1038356 out.go:368] Setting JSON to false
	I1120 22:27:05.089479 1038356 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18550,"bootTime":1763659075,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:27:05.089579 1038356 start.go:143] virtualization:  
	I1120 22:27:05.091027 1038356 out.go:179] * [no-preload-041029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:27:05.092086 1038356 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:27:05.093190 1038356 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:27:05.094230 1038356 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:27:05.095246 1038356 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:27:05.096351 1038356 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:27:05.096506 1038356 notify.go:221] Checking for updates...
	I1120 22:27:05.099461 1038356 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:27:05.101396 1038356 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:05.101603 1038356 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:27:05.124961 1038356 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:27:05.125097 1038356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:05.199833 1038356 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:27:05.189377596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:05.199940 1038356 docker.go:319] overlay module found
	I1120 22:27:05.201128 1038356 out.go:179] * Using the docker driver based on user configuration
	I1120 22:27:05.202103 1038356 start.go:309] selected driver: docker
	I1120 22:27:05.202117 1038356 start.go:930] validating driver "docker" against <nil>
	I1120 22:27:05.202130 1038356 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:27:05.202836 1038356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:05.261731 1038356 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:27:05.252347562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:05.261901 1038356 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 22:27:05.262134 1038356 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:27:05.263318 1038356 out.go:179] * Using Docker driver with root privileges
	I1120 22:27:05.264352 1038356 cni.go:84] Creating CNI manager for ""
	I1120 22:27:05.264418 1038356 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:27:05.264433 1038356 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 22:27:05.264512 1038356 start.go:353] cluster config:
	{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:27:05.265853 1038356 out.go:179] * Starting "no-preload-041029" primary control-plane node in "no-preload-041029" cluster
	I1120 22:27:05.266826 1038356 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:27:05.268051 1038356 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:27:05.269086 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:05.269156 1038356 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:27:05.269216 1038356 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:27:05.269246 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json: {Name:mkd1b9589e6da64d2e37f22e104fdda2b4bf8f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:05.272331 1038356 cache.go:107] acquiring lock: {Name:mkc179cc367be18f686b3ff0d25d7c0a4d38107a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.272537 1038356 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:05.272775 1038356 cache.go:107] acquiring lock: {Name:mk5ddbac06bb4c58e0829e32dc3cac2e0f3d3484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.272988 1038356 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:05.273271 1038356 cache.go:107] acquiring lock: {Name:mk6473ff5661413ee7b260344002f555ac817d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.273384 1038356 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:05.273657 1038356 cache.go:107] acquiring lock: {Name:mk452c1826f4ea2a7476e6cd709c98ef1de14eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.273748 1038356 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:05.273983 1038356 cache.go:107] acquiring lock: {Name:mk1e9e4e31f0a8424c64380df7184f5c5bff61db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.274062 1038356 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:05.274166 1038356 cache.go:107] acquiring lock: {Name:mk2d31e05763b1401b87a3347e71140539ad5cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.274229 1038356 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 22:27:05.275004 1038356 cache.go:107] acquiring lock: {Name:mkfe8a3234fd2567b981ed2e943c252800f37788 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.275104 1038356 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 22:27:05.275115 1038356 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 5.636906ms
	I1120 22:27:05.275122 1038356 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 22:27:05.275137 1038356 cache.go:107] acquiring lock: {Name:mk7bd038abefa117c730983c9f9ea84fc4100cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.275232 1038356 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:05.276380 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:05.276474 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:05.277155 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:05.277339 1038356 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:05.278469 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:05.278712 1038356 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 22:27:05.278859 1038356 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:05.296837 1038356 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:27:05.296863 1038356 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:27:05.296876 1038356 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:27:05.296902 1038356 start.go:360] acquireMachinesLock for no-preload-041029: {Name:mk272b44e31f3ea0985bee4020b0ba7b3af4d70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.297012 1038356 start.go:364] duration metric: took 93.367µs to acquireMachinesLock for "no-preload-041029"
	I1120 22:27:05.297038 1038356 start.go:93] Provisioning new machine with config: &{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:27:05.297103 1038356 start.go:125] createHost starting for "" (driver="docker")
	W1120 22:27:04.912269 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	I1120 22:27:06.916846 1034660 pod_ready.go:94] pod "coredns-66bc5c9577-c5cg5" is "Ready"
	I1120 22:27:06.916881 1034660 pod_ready.go:86] duration metric: took 33.512227193s for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.927126 1034660 pod_ready.go:83] waiting for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.948454 1034660 pod_ready.go:94] pod "etcd-embed-certs-270206" is "Ready"
	I1120 22:27:06.948481 1034660 pod_ready.go:86] duration metric: took 21.332246ms for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.967431 1034660 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.993132 1034660 pod_ready.go:94] pod "kube-apiserver-embed-certs-270206" is "Ready"
	I1120 22:27:06.993155 1034660 pod_ready.go:86] duration metric: took 25.700449ms for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.998592 1034660 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.114757 1034660 pod_ready.go:94] pod "kube-controller-manager-embed-certs-270206" is "Ready"
	I1120 22:27:07.114780 1034660 pod_ready.go:86] duration metric: took 116.166794ms for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.316226 1034660 pod_ready.go:83] waiting for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.708692 1034660 pod_ready.go:94] pod "kube-proxy-9d84b" is "Ready"
	I1120 22:27:07.708714 1034660 pod_ready.go:86] duration metric: took 392.467084ms for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.908924 1034660 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:08.310054 1034660 pod_ready.go:94] pod "kube-scheduler-embed-certs-270206" is "Ready"
	I1120 22:27:08.310087 1034660 pod_ready.go:86] duration metric: took 401.119027ms for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:08.310100 1034660 pod_ready.go:40] duration metric: took 34.910385639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:27:08.402732 1034660 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:27:08.407895 1034660 out.go:179] * Done! kubectl is now configured to use "embed-certs-270206" cluster and "default" namespace by default
	I1120 22:27:05.298830 1038356 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:27:05.299099 1038356 start.go:159] libmachine.API.Create for "no-preload-041029" (driver="docker")
	I1120 22:27:05.299145 1038356 client.go:173] LocalClient.Create starting
	I1120 22:27:05.299211 1038356 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:27:05.299243 1038356 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:05.299262 1038356 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:05.299325 1038356 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:27:05.299348 1038356 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:05.299369 1038356 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:05.299726 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:27:05.326197 1038356 cli_runner.go:211] docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:27:05.326304 1038356 network_create.go:284] running [docker network inspect no-preload-041029] to gather additional debugging logs...
	I1120 22:27:05.326337 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029
	W1120 22:27:05.341001 1038356 cli_runner.go:211] docker network inspect no-preload-041029 returned with exit code 1
	I1120 22:27:05.341036 1038356 network_create.go:287] error running [docker network inspect no-preload-041029]: docker network inspect no-preload-041029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-041029 not found
	I1120 22:27:05.341071 1038356 network_create.go:289] output of [docker network inspect no-preload-041029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-041029 not found
	
	** /stderr **
	I1120 22:27:05.341185 1038356 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:05.360359 1038356 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:27:05.360752 1038356 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:27:05.361129 1038356 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:27:05.361463 1038356 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ffd59b794c5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:8a:b8:8c:c5} reservation:<nil>}
	I1120 22:27:05.361998 1038356 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c1e240}
	I1120 22:27:05.362025 1038356 network_create.go:124] attempt to create docker network no-preload-041029 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 22:27:05.362090 1038356 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-041029 no-preload-041029
	I1120 22:27:05.441331 1038356 network_create.go:108] docker network no-preload-041029 192.168.85.0/24 created
	I1120 22:27:05.441385 1038356 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-041029" container
	I1120 22:27:05.441461 1038356 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:27:05.459550 1038356 cli_runner.go:164] Run: docker volume create no-preload-041029 --label name.minikube.sigs.k8s.io=no-preload-041029 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:27:05.477400 1038356 oci.go:103] Successfully created a docker volume no-preload-041029
	I1120 22:27:05.477494 1038356 cli_runner.go:164] Run: docker run --rm --name no-preload-041029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-041029 --entrypoint /usr/bin/test -v no-preload-041029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:27:05.739995 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 22:27:05.750505 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 22:27:05.760166 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 22:27:05.764299 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 22:27:05.803894 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 22:27:05.824064 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 22:27:05.871696 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1120 22:27:05.871725 1038356 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 597.559436ms
	I1120 22:27:05.871740 1038356 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 22:27:05.920522 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 22:27:06.123869 1038356 oci.go:107] Successfully prepared a docker volume no-preload-041029
	I1120 22:27:06.123918 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1120 22:27:06.124073 1038356 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:27:06.124252 1038356 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:27:06.198471 1038356 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-041029 --name no-preload-041029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-041029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-041029 --network no-preload-041029 --ip 192.168.85.2 --volume no-preload-041029:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:27:06.274474 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 22:27:06.274500 1038356 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.004679139s
	I1120 22:27:06.274515 1038356 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 22:27:06.662718 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Running}}
	I1120 22:27:06.745639 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:06.840940 1038356 cli_runner.go:164] Run: docker exec no-preload-041029 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:27:06.925543 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 22:27:06.932315 1038356 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.65903659s
	I1120 22:27:06.932384 1038356 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 22:27:06.960024 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 22:27:06.960055 1038356 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.684918046s
	I1120 22:27:06.960070 1038356 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 22:27:07.017584 1038356 oci.go:144] the created container "no-preload-041029" has a running status.
	I1120 22:27:07.017611 1038356 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa...
	I1120 22:27:07.051381 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 22:27:07.051462 1038356 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.777807379s
	I1120 22:27:07.051494 1038356 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 22:27:07.139612 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 22:27:07.139666 1038356 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.866872541s
	I1120 22:27:07.139682 1038356 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 22:27:07.572834 1038356 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:27:07.593776 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:07.613275 1038356 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:27:07.613299 1038356 kic_runner.go:114] Args: [docker exec --privileged no-preload-041029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:27:07.663817 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:07.682274 1038356 machine.go:94] provisionDockerMachine start ...
	I1120 22:27:07.682370 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:07.704361 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:07.704700 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:07.704711 1038356 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:27:07.705669 1038356 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:27:08.249914 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 22:27:08.249945 1038356 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.975963796s
	I1120 22:27:08.249958 1038356 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 22:27:08.249978 1038356 cache.go:87] Successfully saved all images to host disk.
	I1120 22:27:10.846697 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:27:10.846723 1038356 ubuntu.go:182] provisioning hostname "no-preload-041029"
	I1120 22:27:10.846804 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:10.865636 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:10.865968 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:10.865985 1038356 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-041029 && echo "no-preload-041029" | sudo tee /etc/hostname
	I1120 22:27:11.021041 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:27:11.021146 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.040377 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:11.040695 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:11.040718 1038356 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-041029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-041029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-041029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:27:11.191443 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:27:11.191477 1038356 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:27:11.191509 1038356 ubuntu.go:190] setting up certificates
	I1120 22:27:11.191524 1038356 provision.go:84] configureAuth start
	I1120 22:27:11.191612 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:11.215419 1038356 provision.go:143] copyHostCerts
	I1120 22:27:11.215501 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:27:11.215516 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:27:11.215595 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:27:11.215696 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:27:11.215706 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:27:11.215734 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:27:11.215793 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:27:11.215803 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:27:11.215833 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:27:11.215913 1038356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.no-preload-041029 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-041029]
	I1120 22:27:11.598437 1038356 provision.go:177] copyRemoteCerts
	I1120 22:27:11.598513 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:27:11.598558 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.619210 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:11.722780 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:27:11.742136 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:27:11.760829 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:27:11.777944 1038356 provision.go:87] duration metric: took 586.396207ms to configureAuth
	I1120 22:27:11.778012 1038356 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:27:11.778207 1038356 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:11.778326 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.795127 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:11.795465 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:11.795488 1038356 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:27:12.186262 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:27:12.186328 1038356 machine.go:97] duration metric: took 4.504031717s to provisionDockerMachine
	I1120 22:27:12.186356 1038356 client.go:176] duration metric: took 6.88720335s to LocalClient.Create
	I1120 22:27:12.186398 1038356 start.go:167] duration metric: took 6.887299762s to libmachine.API.Create "no-preload-041029"
	I1120 22:27:12.186427 1038356 start.go:293] postStartSetup for "no-preload-041029" (driver="docker")
	I1120 22:27:12.186467 1038356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:27:12.186559 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:27:12.186616 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.206898 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.307458 1038356 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:27:12.310764 1038356 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:27:12.310795 1038356 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:27:12.310807 1038356 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:27:12.310880 1038356 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:27:12.310962 1038356 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:27:12.311101 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:27:12.318767 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:27:12.338887 1038356 start.go:296] duration metric: took 152.415415ms for postStartSetup
	I1120 22:27:12.339323 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:12.356573 1038356 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:27:12.356855 1038356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:27:12.356908 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.374298 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.471917 1038356 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:27:12.480443 1038356 start.go:128] duration metric: took 7.183324425s to createHost
	I1120 22:27:12.480470 1038356 start.go:83] releasing machines lock for "no-preload-041029", held for 7.183449956s
	I1120 22:27:12.480546 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:12.498616 1038356 ssh_runner.go:195] Run: cat /version.json
	I1120 22:27:12.498706 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.499072 1038356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:27:12.499143 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.521390 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.546276 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.631084 1038356 ssh_runner.go:195] Run: systemctl --version
	I1120 22:27:12.740768 1038356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:27:12.779874 1038356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:27:12.784823 1038356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:27:12.784899 1038356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:27:12.821206 1038356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:27:12.821286 1038356 start.go:496] detecting cgroup driver to use...
	I1120 22:27:12.821350 1038356 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:27:12.821439 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:27:12.841892 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:27:12.854802 1038356 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:27:12.854959 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:27:12.873831 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:27:12.893276 1038356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:27:13.021890 1038356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:27:13.164628 1038356 docker.go:234] disabling docker service ...
	I1120 22:27:13.164774 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:27:13.188650 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:27:13.202328 1038356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:27:13.321507 1038356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:27:13.450921 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:27:13.465329 1038356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:27:13.481208 1038356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:27:13.481330 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.490246 1038356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:27:13.490362 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.499552 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.508453 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.517793 1038356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:27:13.526604 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.535559 1038356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.549911 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.564838 1038356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:27:13.574295 1038356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:27:13.582363 1038356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:27:13.721674 1038356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:27:13.906314 1038356 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:27:13.906443 1038356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:27:13.910513 1038356 start.go:564] Will wait 60s for crictl version
	I1120 22:27:13.910583 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:13.914515 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:27:13.942015 1038356 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:27:13.942119 1038356 ssh_runner.go:195] Run: crio --version
	I1120 22:27:13.975100 1038356 ssh_runner.go:195] Run: crio --version
	I1120 22:27:14.008047 1038356 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:27:14.011101 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:14.028870 1038356 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:27:14.033306 1038356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:27:14.043775 1038356 kubeadm.go:884] updating cluster {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:27:14.043899 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:14.043957 1038356 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:27:14.071070 1038356 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 22:27:14.071098 1038356 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1120 22:27:14.071160 1038356 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:14.071396 1038356 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.071498 1038356 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.071594 1038356 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.071683 1038356 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.071771 1038356 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.071867 1038356 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.071959 1038356 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.072943 1038356 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.073175 1038356 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.073293 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.073413 1038356 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:14.073785 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.073893 1038356 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.073981 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.074146 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.332761 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1120 22:27:14.333236 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.333443 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.349275 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.349415 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.387736 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.396753 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.482904 1038356 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1120 22:27:14.482947 1038356 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.483177 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483063 1038356 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1120 22:27:14.483272 1038356 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.483305 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483093 1038356 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1120 22:27:14.483341 1038356 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.483362 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483486 1038356 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1120 22:27:14.483520 1038356 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.483719 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.507201 1038356 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1120 22:27:14.507238 1038356 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.507289 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.507392 1038356 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1120 22:27:14.507409 1038356 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.507433 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.522773 1038356 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1120 22:27:14.523086 1038356 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.522868 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.522893 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.522936 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.522970 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.523011 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.523038 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.523273 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.613216 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.618902 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.619065 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.619075 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.619171 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.619229 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.627194 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.719310 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.729329 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.729431 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.729519 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.729609 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.732496 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.732605 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.788986 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.822801 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 22:27:14.822897 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 22:27:14.823000 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:14.823084 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 22:27:14.823086 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 22:27:14.823131 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 22:27:14.823155 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 22:27:14.823174 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:14.835528 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 22:27:14.835692 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 22:27:14.863799 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 22:27:14.863968 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 22:27:14.864075 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 22:27:14.864159 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 22:27:14.864303 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 22:27:14.864487 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1120 22:27:14.864386 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 22:27:14.864547 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1120 22:27:14.864415 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 22:27:14.864597 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1120 22:27:14.864438 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 22:27:14.864624 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1120 22:27:14.864457 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 22:27:14.864649 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1120 22:27:14.920308 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 22:27:14.920348 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1120 22:27:14.920394 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 22:27:14.920494 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1120 22:27:14.961042 1038356 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:14.961397 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:15.419113 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1120 22:27:15.419151 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 22:27:15.419229 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1120 22:27:15.463951 1038356 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1120 22:27:15.464196 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:17.419243 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.999985037s)
	I1120 22:27:17.419341 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 22:27:17.419276 1038356 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.955031285s)
	I1120 22:27:17.419463 1038356 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1120 22:27:17.419400 1038356 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:17.419515 1038356 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:17.419612 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:17.419617 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:17.424825 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:19.103376 1038356 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.678474934s)
	I1120 22:27:19.103366 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.683636214s)
	I1120 22:27:19.103423 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 22:27:19.103449 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 22:27:19.103479 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:19.103507 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	
	
	==> CRI-O <==
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.715504745Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.718769419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.71881129Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.718838999Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721815235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721848006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721864778Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724936079Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724970229Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724986762Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.728646879Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.728681596Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.558059868Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0fbad06d-2aab-436d-a14a-ff929b1ec827 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.559427787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a3841d0d-734e-461d-b1ef-102fdd58e10f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.560573968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=11e0202d-97be-4e57-b496-b0981d2db0ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.560671594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.56842746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.570325264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.608487535Z" level=info msg="Created container cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=11e0202d-97be-4e57-b496-b0981d2db0ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.613813077Z" level=info msg="Starting container: cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab" id=47f54bd4-fd25-429c-8f69-246ff39f35f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.617325655Z" level=info msg="Started container" PID=1737 containerID=cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper id=47f54bd4-fd25-429c-8f69-246ff39f35f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea
	Nov 20 22:27:20 embed-certs-270206 conmon[1735]: conmon cea1f61272e3fca822e4 <ninfo>: container 1737 exited with status 1
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.876180314Z" level=info msg="Removing container: 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.886663731Z" level=info msg="Error loading conmon cgroup of container 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f: cgroup deleted" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.90164771Z" level=info msg="Removed container 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cea1f61272e3f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   7cd8c4080f003       dashboard-metrics-scraper-6ffb444bf9-7kbn9   kubernetes-dashboard
	644045e039c8e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   38fda02f4f1b4       storage-provisioner                          kube-system
	bf57dfc57e754       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   9cd2eaf0fc4fa       kubernetes-dashboard-855c9754f9-8zhp9        kubernetes-dashboard
	e71b73690cd58       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   4bb6f548a779d       coredns-66bc5c9577-c5cg5                     kube-system
	d3a4faf36bc29       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   313ffda348a94       busybox                                      default
	afa79562f9c94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   38fda02f4f1b4       storage-provisioner                          kube-system
	345088aa6124d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   b470247ad86d4       kindnet-9sqjv                                kube-system
	8c6434945bfea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   a820db1c86e1c       kube-proxy-9d84b                             kube-system
	3b1fee8d5af72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   534934c41d64a       kube-scheduler-embed-certs-270206            kube-system
	0e18c657e0d1a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   2fccc67f292c0       kube-controller-manager-embed-certs-270206   kube-system
	ea0c8d065057f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   84e8908bbf69a       etcd-embed-certs-270206                      kube-system
	a5edded9820b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   f6e10945c0b6f       kube-apiserver-embed-certs-270206            kube-system
	
	
	==> coredns [e71b73690cd58a0e2ae007ea5eee09f437d2a6e6614e83f0ae2f01702549a622] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57018 - 61497 "HINFO IN 4775963711003691506.2448520270965228608. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025161677s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-270206
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-270206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-270206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_25_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-270206
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:27:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-270206
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                484a9a63-7f62-411b-a1d5-b7485838eb61
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-c5cg5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-embed-certs-270206                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-9sqjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-270206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-270206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-9d84b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-270206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7kbn9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8zhp9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m31s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m31s)  kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m31s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-270206 event: Registered Node embed-certs-270206 in Controller
	  Normal   NodeReady                95s                    kubelet          Node embed-certs-270206 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-270206 event: Registered Node embed-certs-270206 in Controller
	
	
	==> dmesg <==
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712] <==
	{"level":"warn","ts":"2025-11-20T22:26:29.480368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.488598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.527947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.551991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.558030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.586798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.600557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.623332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.637202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.706407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.713374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.721873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.739953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.750261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.767516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.798831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.807675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.828408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.868487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.887283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.920232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.944918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.962143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.981335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:30.046392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:27:25 up  5:09,  0 user,  load average: 3.23, 3.33, 2.72
	Linux embed-certs-270206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [345088aa6124d02a9931e7016c0a1f09f4824adfef2e5d2fd4e64bda6a242344] <==
	I1120 22:26:32.466970       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:26:32.481780       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:26:32.482017       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:26:32.486914       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:26:32.487633       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:26:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:26:32.707433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:26:32.714124       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:26:32.714161       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:26:32.714669       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:27:02.707592       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:27:02.707593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:27:02.715140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:27:02.715241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1120 22:27:03.815175       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:27:03.815213       1 metrics.go:72] Registering metrics
	I1120 22:27:03.815303       1 controller.go:711] "Syncing nftables rules"
	I1120 22:27:12.711101       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:27:12.711155       1 main.go:301] handling current node
	I1120 22:27:22.707085       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:27:22.707118       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677] <==
	I1120 22:26:31.172038       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:26:31.172105       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:26:31.204935       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:26:31.205090       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:26:31.206600       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:26:31.206732       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:26:31.206755       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:26:31.206761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:26:31.206768       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:26:31.210299       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:26:31.218675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:26:31.241013       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 22:26:31.264698       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:26:31.595124       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:26:31.881351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:26:32.800990       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:26:33.006110       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:26:33.096335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:26:33.121393       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:26:33.280509       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.160.218"}
	I1120 22:26:33.300816       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.76.226"}
	I1120 22:26:35.510154       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:26:35.707932       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:26:35.708071       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:26:35.914655       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913] <==
	I1120 22:26:35.376889       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:26:35.376904       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:26:35.380422       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 22:26:35.380446       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:26:35.382645       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 22:26:35.383833       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:26:35.388022       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:26:35.390611       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:26:35.393938       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:26:35.398219       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:26:35.400663       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:26:35.401797       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:26:35.401841       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:26:35.401895       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:26:35.401946       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:26:35.402140       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:26:35.405544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:26:35.408845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:26:35.410777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:26:35.414325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:26:35.420636       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:26:35.428979       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 22:26:35.430229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:35.436399       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:26:35.439673       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [8c6434945bfead8d9b74fa7b85cd734ff1ff9683d7020d6b958ee4c50150bcba] <==
	I1120 22:26:33.022548       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:26:33.456961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:26:33.561118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:26:33.561164       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:26:33.561249       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:26:33.747340       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:26:33.747521       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:26:33.755182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:26:33.755706       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:26:33.756030       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:33.757718       1 config.go:200] "Starting service config controller"
	I1120 22:26:33.764366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:26:33.764546       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:26:33.764579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:26:33.764661       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:26:33.764692       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:26:33.771905       1 config.go:309] "Starting node config controller"
	I1120 22:26:33.771931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:26:33.771938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:26:33.865149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:26:33.865191       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:26:33.865236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824] <==
	I1120 22:26:30.895645       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:26:33.810459       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:26:33.810568       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:33.818671       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:26:33.818939       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:26:33.818961       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:26:33.819017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:26:33.822397       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:33.822423       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:33.822462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.822470       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.919541       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:26:33.923041       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.923943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: I1120 22:26:36.147876     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lx4b\" (UniqueName: \"kubernetes.io/projected/54738609-0716-4bbe-a7c8-f7bf920b502b-kube-api-access-5lx4b\") pod \"kubernetes-dashboard-855c9754f9-8zhp9\" (UID: \"54738609-0716-4bbe-a7c8-f7bf920b502b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8zhp9"
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: W1120 22:26:36.324781     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea WatchSource:0}: Error finding container 7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea: Status 404 returned error can't find the container with id 7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: W1120 22:26:36.344572     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9 WatchSource:0}: Error finding container 9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9: Status 404 returned error can't find the container with id 9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: I1120 22:26:36.395253     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 22:26:41 embed-certs-270206 kubelet[781]: I1120 22:26:41.743063     781 scope.go:117] "RemoveContainer" containerID="21d05a77fd7d33df2240e28d291277fbcae5731f5eef27d2e434153552e9eef2"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: I1120 22:26:42.751068     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: E1120 22:26:42.751777     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: I1120 22:26:42.752919     781 scope.go:117] "RemoveContainer" containerID="21d05a77fd7d33df2240e28d291277fbcae5731f5eef27d2e434153552e9eef2"
	Nov 20 22:26:46 embed-certs-270206 kubelet[781]: I1120 22:26:46.293022     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:46 embed-certs-270206 kubelet[781]: E1120 22:26:46.293230     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.558413     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.795134     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.795536     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: E1120 22:26:59.795727     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.830257     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8zhp9" podStartSLOduration=15.222341075 podStartE2EDuration="24.829529533s" podCreationTimestamp="2025-11-20 22:26:35 +0000 UTC" firstStartedPulling="2025-11-20 22:26:36.347254671 +0000 UTC m=+10.964724133" lastFinishedPulling="2025-11-20 22:26:45.954443121 +0000 UTC m=+20.571912591" observedRunningTime="2025-11-20 22:26:46.774506333 +0000 UTC m=+21.391975795" watchObservedRunningTime="2025-11-20 22:26:59.829529533 +0000 UTC m=+34.446998994"
	Nov 20 22:27:03 embed-certs-270206 kubelet[781]: I1120 22:27:03.810697     781 scope.go:117] "RemoveContainer" containerID="afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443"
	Nov 20 22:27:06 embed-certs-270206 kubelet[781]: I1120 22:27:06.292875     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:06 embed-certs-270206 kubelet[781]: E1120 22:27:06.293510     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.557354     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.857801     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.858491     781 scope.go:117] "RemoveContainer" containerID="cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: E1120 22:27:20.858894     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bf57dfc57e7549a44597fd1849b61c6c486546fa7ec4348a7ed3ac28731fa817] <==
	2025/11/20 22:26:46 Using namespace: kubernetes-dashboard
	2025/11/20 22:26:46 Using in-cluster config to connect to apiserver
	2025/11/20 22:26:46 Using secret token for csrf signing
	2025/11/20 22:26:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:26:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:26:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:26:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:26:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:26:46 Generating JWE encryption key
	2025/11/20 22:26:46 Initializing JWE encryption key from synchronized object
	2025/11/20 22:26:46 Creating in-cluster Sidecar client
	2025/11/20 22:26:46 Serving insecurely on HTTP port: 9090
	2025/11/20 22:26:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:27:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:26:46 Starting overwatch
	
	
	==> storage-provisioner [644045e039c8edbef28f05e2081e02e88e7668ac9e011777e75d8215f8ad38fa] <==
	I1120 22:27:03.898326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:27:03.921279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:27:03.921332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:27:03.924089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:07.399458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:11.659553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:15.259753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:18.315174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.339202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.352652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:27:21.352860       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:27:21.353104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b!
	I1120 22:27:21.358024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f3cea17-cd64-4701-9269-df7a7dbcb868", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b became leader
	W1120 22:27:21.370059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.383076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:27:21.458093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b!
	W1120 22:27:23.389824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:23.401692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:25.410296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:25.416416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443] <==
	I1120 22:26:32.836600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:27:02.839970       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206
E1120 22:27:26.482022  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206: exit status 2 (506.531886ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-270206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
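For a manual re-run of the same post-mortem checks against this profile, the commands recorded above can be replayed directly (profile name, context, and flags copied verbatim from the log; the binary path is assumed to be the CI-built out/minikube-linux-arm64 referenced throughout this report):
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206
	kubectl --context embed-certs-270206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25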
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-270206
helpers_test.go:243: (dbg) docker inspect embed-certs-270206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	        "Created": "2025-11-20T22:24:33.33301512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1034786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:26:18.688600985Z",
	            "FinishedAt": "2025-11-20T22:26:17.274402434Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/hosts",
	        "LogPath": "/var/lib/docker/containers/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd-json.log",
	        "Name": "/embed-certs-270206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-270206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-270206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd",
	                "LowerDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fb802314e5895d034585e3d5b88776b2d0a768144718b7bdbe22d8407ab2ed6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-270206",
	                "Source": "/var/lib/docker/volumes/embed-certs-270206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-270206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-270206",
	                "name.minikube.sigs.k8s.io": "embed-certs-270206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfa5f0301d4a5415935f6b792940a8c21a62ea07e578b8e3707c6127632bd68a",
	            "SandboxKey": "/var/run/docker/netns/dfa5f0301d4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-270206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:89:03:33:43:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ffd59b794c532505e054cacac90fc1087646ff0df0b0ac27f388edeea26b442",
	                    "EndpointID": "e2ec7b597ebf174fd24ee8b35cefc6ebc009614823c1de44c0a26bed30bbb405",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-270206",
	                        "155df8ef967b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
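For reference, individual fields of the inspect output above can be read with docker inspect's standard --format flag instead of the full JSON dump; for example, the host port that the apiserver's 8443/tcp is published on (container name as shown above; per the NetworkSettings.Ports section this should print 34185):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-270206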
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206: exit status 2 (490.045889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-270206 logs -n 25: (1.664349951s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:22 UTC │ 20 Nov 25 22:23 UTC │
	│ image   │ old-k8s-version-443192 image list --format=json                                                                                                                                                                                               │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ pause   │ -p old-k8s-version-443192 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │                     │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:27:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:27:05.087368 1038356 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:27:05.087545 1038356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:05.087568 1038356 out.go:374] Setting ErrFile to fd 2...
	I1120 22:27:05.087586 1038356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:05.087966 1038356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:27:05.088450 1038356 out.go:368] Setting JSON to false
	I1120 22:27:05.089479 1038356 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18550,"bootTime":1763659075,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:27:05.089579 1038356 start.go:143] virtualization:  
	I1120 22:27:05.091027 1038356 out.go:179] * [no-preload-041029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:27:05.092086 1038356 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:27:05.093190 1038356 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:27:05.094230 1038356 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:27:05.095246 1038356 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:27:05.096351 1038356 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:27:05.096506 1038356 notify.go:221] Checking for updates...
	I1120 22:27:05.099461 1038356 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:27:05.101396 1038356 config.go:182] Loaded profile config "embed-certs-270206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:05.101603 1038356 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:27:05.124961 1038356 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:27:05.125097 1038356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:05.199833 1038356 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:27:05.189377596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:05.199940 1038356 docker.go:319] overlay module found
	I1120 22:27:05.201128 1038356 out.go:179] * Using the docker driver based on user configuration
	I1120 22:27:05.202103 1038356 start.go:309] selected driver: docker
	I1120 22:27:05.202117 1038356 start.go:930] validating driver "docker" against <nil>
	I1120 22:27:05.202130 1038356 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:27:05.202836 1038356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:05.261731 1038356 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:27:05.252347562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:05.261901 1038356 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 22:27:05.262134 1038356 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:27:05.263318 1038356 out.go:179] * Using Docker driver with root privileges
	I1120 22:27:05.264352 1038356 cni.go:84] Creating CNI manager for ""
	I1120 22:27:05.264418 1038356 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:27:05.264433 1038356 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 22:27:05.264512 1038356 start.go:353] cluster config:
	{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:27:05.265853 1038356 out.go:179] * Starting "no-preload-041029" primary control-plane node in "no-preload-041029" cluster
	I1120 22:27:05.266826 1038356 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:27:05.268051 1038356 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:27:05.269086 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:05.269156 1038356 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:27:05.269216 1038356 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:27:05.269246 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json: {Name:mkd1b9589e6da64d2e37f22e104fdda2b4bf8f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:05.272331 1038356 cache.go:107] acquiring lock: {Name:mkc179cc367be18f686b3ff0d25d7c0a4d38107a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.272537 1038356 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:05.272775 1038356 cache.go:107] acquiring lock: {Name:mk5ddbac06bb4c58e0829e32dc3cac2e0f3d3484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.272988 1038356 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:05.273271 1038356 cache.go:107] acquiring lock: {Name:mk6473ff5661413ee7b260344002f555ac817d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.273384 1038356 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:05.273657 1038356 cache.go:107] acquiring lock: {Name:mk452c1826f4ea2a7476e6cd709c98ef1de14eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.273748 1038356 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:05.273983 1038356 cache.go:107] acquiring lock: {Name:mk1e9e4e31f0a8424c64380df7184f5c5bff61db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.274062 1038356 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:05.274166 1038356 cache.go:107] acquiring lock: {Name:mk2d31e05763b1401b87a3347e71140539ad5cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.274229 1038356 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 22:27:05.275004 1038356 cache.go:107] acquiring lock: {Name:mkfe8a3234fd2567b981ed2e943c252800f37788 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.275104 1038356 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 22:27:05.275115 1038356 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 5.636906ms
	I1120 22:27:05.275122 1038356 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 22:27:05.275137 1038356 cache.go:107] acquiring lock: {Name:mk7bd038abefa117c730983c9f9ea84fc4100cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.275232 1038356 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:05.276380 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:05.276474 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:05.277155 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:05.277339 1038356 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:05.278469 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:05.278712 1038356 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 22:27:05.278859 1038356 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:05.296837 1038356 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:27:05.296863 1038356 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:27:05.296876 1038356 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:27:05.296902 1038356 start.go:360] acquireMachinesLock for no-preload-041029: {Name:mk272b44e31f3ea0985bee4020b0ba7b3af4d70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:05.297012 1038356 start.go:364] duration metric: took 93.367µs to acquireMachinesLock for "no-preload-041029"
	I1120 22:27:05.297038 1038356 start.go:93] Provisioning new machine with config: &{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:27:05.297103 1038356 start.go:125] createHost starting for "" (driver="docker")
	W1120 22:27:04.912269 1034660 pod_ready.go:104] pod "coredns-66bc5c9577-c5cg5" is not "Ready", error: <nil>
	I1120 22:27:06.916846 1034660 pod_ready.go:94] pod "coredns-66bc5c9577-c5cg5" is "Ready"
	I1120 22:27:06.916881 1034660 pod_ready.go:86] duration metric: took 33.512227193s for pod "coredns-66bc5c9577-c5cg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.927126 1034660 pod_ready.go:83] waiting for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.948454 1034660 pod_ready.go:94] pod "etcd-embed-certs-270206" is "Ready"
	I1120 22:27:06.948481 1034660 pod_ready.go:86] duration metric: took 21.332246ms for pod "etcd-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.967431 1034660 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.993132 1034660 pod_ready.go:94] pod "kube-apiserver-embed-certs-270206" is "Ready"
	I1120 22:27:06.993155 1034660 pod_ready.go:86] duration metric: took 25.700449ms for pod "kube-apiserver-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:06.998592 1034660 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.114757 1034660 pod_ready.go:94] pod "kube-controller-manager-embed-certs-270206" is "Ready"
	I1120 22:27:07.114780 1034660 pod_ready.go:86] duration metric: took 116.166794ms for pod "kube-controller-manager-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.316226 1034660 pod_ready.go:83] waiting for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.708692 1034660 pod_ready.go:94] pod "kube-proxy-9d84b" is "Ready"
	I1120 22:27:07.708714 1034660 pod_ready.go:86] duration metric: took 392.467084ms for pod "kube-proxy-9d84b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:07.908924 1034660 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:08.310054 1034660 pod_ready.go:94] pod "kube-scheduler-embed-certs-270206" is "Ready"
	I1120 22:27:08.310087 1034660 pod_ready.go:86] duration metric: took 401.119027ms for pod "kube-scheduler-embed-certs-270206" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:27:08.310100 1034660 pod_ready.go:40] duration metric: took 34.910385639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:27:08.402732 1034660 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:27:08.407895 1034660 out.go:179] * Done! kubectl is now configured to use "embed-certs-270206" cluster and "default" namespace by default
	I1120 22:27:05.298830 1038356 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:27:05.299099 1038356 start.go:159] libmachine.API.Create for "no-preload-041029" (driver="docker")
	I1120 22:27:05.299145 1038356 client.go:173] LocalClient.Create starting
	I1120 22:27:05.299211 1038356 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:27:05.299243 1038356 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:05.299262 1038356 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:05.299325 1038356 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:27:05.299348 1038356 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:05.299369 1038356 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:05.299726 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:27:05.326197 1038356 cli_runner.go:211] docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:27:05.326304 1038356 network_create.go:284] running [docker network inspect no-preload-041029] to gather additional debugging logs...
	I1120 22:27:05.326337 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029
	W1120 22:27:05.341001 1038356 cli_runner.go:211] docker network inspect no-preload-041029 returned with exit code 1
	I1120 22:27:05.341036 1038356 network_create.go:287] error running [docker network inspect no-preload-041029]: docker network inspect no-preload-041029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-041029 not found
	I1120 22:27:05.341071 1038356 network_create.go:289] output of [docker network inspect no-preload-041029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-041029 not found
	
	** /stderr **
	I1120 22:27:05.341185 1038356 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:05.360359 1038356 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:27:05.360752 1038356 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:27:05.361129 1038356 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:27:05.361463 1038356 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ffd59b794c5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:58:8a:b8:8c:c5} reservation:<nil>}
	I1120 22:27:05.361998 1038356 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c1e240}
	I1120 22:27:05.362025 1038356 network_create.go:124] attempt to create docker network no-preload-041029 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 22:27:05.362090 1038356 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-041029 no-preload-041029
	I1120 22:27:05.441331 1038356 network_create.go:108] docker network no-preload-041029 192.168.85.0/24 created
	I1120 22:27:05.441385 1038356 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-041029" container
	I1120 22:27:05.441461 1038356 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:27:05.459550 1038356 cli_runner.go:164] Run: docker volume create no-preload-041029 --label name.minikube.sigs.k8s.io=no-preload-041029 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:27:05.477400 1038356 oci.go:103] Successfully created a docker volume no-preload-041029
	I1120 22:27:05.477494 1038356 cli_runner.go:164] Run: docker run --rm --name no-preload-041029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-041029 --entrypoint /usr/bin/test -v no-preload-041029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:27:05.739995 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 22:27:05.750505 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 22:27:05.760166 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 22:27:05.764299 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 22:27:05.803894 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 22:27:05.824064 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 22:27:05.871696 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1120 22:27:05.871725 1038356 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 597.559436ms
	I1120 22:27:05.871740 1038356 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 22:27:05.920522 1038356 cache.go:162] opening:  /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 22:27:06.123869 1038356 oci.go:107] Successfully prepared a docker volume no-preload-041029
	I1120 22:27:06.123918 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1120 22:27:06.124073 1038356 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:27:06.124252 1038356 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:27:06.198471 1038356 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-041029 --name no-preload-041029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-041029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-041029 --network no-preload-041029 --ip 192.168.85.2 --volume no-preload-041029:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:27:06.274474 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 22:27:06.274500 1038356 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.004679139s
	I1120 22:27:06.274515 1038356 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 22:27:06.662718 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Running}}
	I1120 22:27:06.745639 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:06.840940 1038356 cli_runner.go:164] Run: docker exec no-preload-041029 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:27:06.925543 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 22:27:06.932315 1038356 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.65903659s
	I1120 22:27:06.932384 1038356 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 22:27:06.960024 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 22:27:06.960055 1038356 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.684918046s
	I1120 22:27:06.960070 1038356 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 22:27:07.017584 1038356 oci.go:144] the created container "no-preload-041029" has a running status.
	I1120 22:27:07.017611 1038356 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa...
	I1120 22:27:07.051381 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 22:27:07.051462 1038356 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.777807379s
	I1120 22:27:07.051494 1038356 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 22:27:07.139612 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 22:27:07.139666 1038356 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.866872541s
	I1120 22:27:07.139682 1038356 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 22:27:07.572834 1038356 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:27:07.593776 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:07.613275 1038356 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:27:07.613299 1038356 kic_runner.go:114] Args: [docker exec --privileged no-preload-041029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:27:07.663817 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:27:07.682274 1038356 machine.go:94] provisionDockerMachine start ...
	I1120 22:27:07.682370 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:07.704361 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:07.704700 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:07.704711 1038356 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:27:07.705669 1038356 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:27:08.249914 1038356 cache.go:157] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 22:27:08.249945 1038356 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.975963796s
	I1120 22:27:08.249958 1038356 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 22:27:08.249978 1038356 cache.go:87] Successfully saved all images to host disk.
	I1120 22:27:10.846697 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:27:10.846723 1038356 ubuntu.go:182] provisioning hostname "no-preload-041029"
	I1120 22:27:10.846804 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:10.865636 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:10.865968 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:10.865985 1038356 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-041029 && echo "no-preload-041029" | sudo tee /etc/hostname
	I1120 22:27:11.021041 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:27:11.021146 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.040377 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:11.040695 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:11.040718 1038356 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-041029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-041029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-041029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:27:11.191443 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:27:11.191477 1038356 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:27:11.191509 1038356 ubuntu.go:190] setting up certificates
	I1120 22:27:11.191524 1038356 provision.go:84] configureAuth start
	I1120 22:27:11.191612 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:11.215419 1038356 provision.go:143] copyHostCerts
	I1120 22:27:11.215501 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:27:11.215516 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:27:11.215595 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:27:11.215696 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:27:11.215706 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:27:11.215734 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:27:11.215793 1038356 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:27:11.215803 1038356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:27:11.215833 1038356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:27:11.215913 1038356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.no-preload-041029 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-041029]
	I1120 22:27:11.598437 1038356 provision.go:177] copyRemoteCerts
	I1120 22:27:11.598513 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:27:11.598558 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.619210 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:11.722780 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:27:11.742136 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:27:11.760829 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:27:11.777944 1038356 provision.go:87] duration metric: took 586.396207ms to configureAuth
	I1120 22:27:11.778012 1038356 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:27:11.778207 1038356 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:11.778326 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:11.795127 1038356 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:11.795465 1038356 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1120 22:27:11.795488 1038356 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:27:12.186262 1038356 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:27:12.186328 1038356 machine.go:97] duration metric: took 4.504031717s to provisionDockerMachine
	I1120 22:27:12.186356 1038356 client.go:176] duration metric: took 6.88720335s to LocalClient.Create
	I1120 22:27:12.186398 1038356 start.go:167] duration metric: took 6.887299762s to libmachine.API.Create "no-preload-041029"
	I1120 22:27:12.186427 1038356 start.go:293] postStartSetup for "no-preload-041029" (driver="docker")
	I1120 22:27:12.186467 1038356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:27:12.186559 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:27:12.186616 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.206898 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.307458 1038356 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:27:12.310764 1038356 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:27:12.310795 1038356 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:27:12.310807 1038356 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:27:12.310880 1038356 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:27:12.310962 1038356 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:27:12.311101 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:27:12.318767 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:27:12.338887 1038356 start.go:296] duration metric: took 152.415415ms for postStartSetup
	I1120 22:27:12.339323 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:12.356573 1038356 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:27:12.356855 1038356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:27:12.356908 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.374298 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.471917 1038356 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:27:12.480443 1038356 start.go:128] duration metric: took 7.183324425s to createHost
	I1120 22:27:12.480470 1038356 start.go:83] releasing machines lock for "no-preload-041029", held for 7.183449956s
	I1120 22:27:12.480546 1038356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:27:12.498616 1038356 ssh_runner.go:195] Run: cat /version.json
	I1120 22:27:12.498706 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.499072 1038356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:27:12.499143 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:27:12.521390 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.546276 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:27:12.631084 1038356 ssh_runner.go:195] Run: systemctl --version
	I1120 22:27:12.740768 1038356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:27:12.779874 1038356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:27:12.784823 1038356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:27:12.784899 1038356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:27:12.821206 1038356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:27:12.821286 1038356 start.go:496] detecting cgroup driver to use...
	I1120 22:27:12.821350 1038356 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:27:12.821439 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:27:12.841892 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:27:12.854802 1038356 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:27:12.854959 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:27:12.873831 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:27:12.893276 1038356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:27:13.021890 1038356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:27:13.164628 1038356 docker.go:234] disabling docker service ...
	I1120 22:27:13.164774 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:27:13.188650 1038356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:27:13.202328 1038356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:27:13.321507 1038356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:27:13.450921 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:27:13.465329 1038356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:27:13.481208 1038356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:27:13.481330 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.490246 1038356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:27:13.490362 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.499552 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.508453 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.517793 1038356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:27:13.526604 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.535559 1038356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:13.549911 1038356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
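	For reference, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in the log only; any surrounding TOML section headers (e.g. [crio.image] or [crio.runtime]) are not visible here and are omitted:
	
		# pause image used for pod sandboxes
		pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup driver matching the detected host driver
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		# allow unprivileged processes to bind low ports inside pods
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]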
	I1120 22:27:13.564838 1038356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:27:13.574295 1038356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:27:13.582363 1038356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:27:13.721674 1038356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:27:13.906314 1038356 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:27:13.906443 1038356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:27:13.910513 1038356 start.go:564] Will wait 60s for crictl version
	I1120 22:27:13.910583 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:13.914515 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:27:13.942015 1038356 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:27:13.942119 1038356 ssh_runner.go:195] Run: crio --version
	I1120 22:27:13.975100 1038356 ssh_runner.go:195] Run: crio --version
	I1120 22:27:14.008047 1038356 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:27:14.011101 1038356 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:14.028870 1038356 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:27:14.033306 1038356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:27:14.043775 1038356 kubeadm.go:884] updating cluster {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:27:14.043899 1038356 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:14.043957 1038356 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:27:14.071070 1038356 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 22:27:14.071098 1038356 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1120 22:27:14.071160 1038356 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:14.071396 1038356 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.071498 1038356 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.071594 1038356 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.071683 1038356 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.071771 1038356 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.071867 1038356 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.071959 1038356 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.072943 1038356 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.073175 1038356 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.073293 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.073413 1038356 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:14.073785 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.073893 1038356 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.073981 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.074146 1038356 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.332761 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1120 22:27:14.333236 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.333443 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.349275 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.349415 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.387736 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.396753 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.482904 1038356 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1120 22:27:14.482947 1038356 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 22:27:14.483177 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483063 1038356 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1120 22:27:14.483272 1038356 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.483305 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483093 1038356 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1120 22:27:14.483341 1038356 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.483362 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.483486 1038356 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1120 22:27:14.483520 1038356 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.483719 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.507201 1038356 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1120 22:27:14.507238 1038356 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.507289 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.507392 1038356 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1120 22:27:14.507409 1038356 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.507433 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.522773 1038356 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1120 22:27:14.523086 1038356 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.522868 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.522893 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.522936 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.522970 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.523011 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.523038 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.523273 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:14.613216 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.618902 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.619065 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.619075 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.619171 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.619229 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.627194 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.719310 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.729329 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 22:27:14.729431 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 22:27:14.729519 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 22:27:14.729609 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 22:27:14.732496 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 22:27:14.732605 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 22:27:14.788986 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 22:27:14.822801 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 22:27:14.822897 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 22:27:14.823000 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:14.823084 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 22:27:14.823086 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 22:27:14.823131 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 22:27:14.823155 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 22:27:14.823174 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:14.835528 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 22:27:14.835692 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 22:27:14.863799 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 22:27:14.863968 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 22:27:14.864075 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 22:27:14.864159 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 22:27:14.864303 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 22:27:14.864487 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1120 22:27:14.864386 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 22:27:14.864547 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1120 22:27:14.864415 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 22:27:14.864597 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1120 22:27:14.864438 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 22:27:14.864624 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1120 22:27:14.864457 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 22:27:14.864649 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1120 22:27:14.920308 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 22:27:14.920348 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1120 22:27:14.920394 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 22:27:14.920494 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1120 22:27:14.961042 1038356 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:14.961397 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1120 22:27:15.419113 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1120 22:27:15.419151 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 22:27:15.419229 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1120 22:27:15.463951 1038356 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1120 22:27:15.464196 1038356 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:17.419243 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.999985037s)
	I1120 22:27:17.419341 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 22:27:17.419276 1038356 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.955031285s)
	I1120 22:27:17.419463 1038356 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1120 22:27:17.419400 1038356 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:17.419515 1038356 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:17.419612 1038356 ssh_runner.go:195] Run: which crictl
	I1120 22:27:17.419617 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1120 22:27:17.424825 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:19.103376 1038356 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.678474934s)
	I1120 22:27:19.103366 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.683636214s)
	I1120 22:27:19.103423 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 22:27:19.103449 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 22:27:19.103479 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:19.103507 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 22:27:20.304522 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.200991931s)
	I1120 22:27:20.304545 1038356 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.201033818s)
	I1120 22:27:20.304551 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1120 22:27:20.304625 1038356 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:20.304627 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 22:27:20.304668 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 22:27:22.318464 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.013765291s)
	I1120 22:27:22.318496 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1120 22:27:22.318517 1038356 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 22:27:22.318471 1038356 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.01383195s)
	I1120 22:27:22.318566 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 22:27:22.318589 1038356 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1120 22:27:22.318667 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1120 22:27:24.023670 1038356 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.704977928s)
	I1120 22:27:24.023714 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1120 22:27:24.023742 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1120 22:27:24.023933 1038356 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.705353564s)
	I1120 22:27:24.023947 1038356 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1120 22:27:24.023967 1038356 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1120 22:27:24.024020 1038356 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
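	In summary, the per-image cache-load sequence visible in the log above amounts to roughly the following steps, sketched here for a single image (etcd). Every command is taken from the log itself; the ordering is a simplification of the interleaved goroutine output, not an additional minikube feature:
	
		# 1. check whether the runtime already has the image at the expected ID
		sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.6.4-0
		# 2. on a hash mismatch ("needs transfer"), remove the stale tag from the runtime
		sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
		# 3. existence check for the cached tarball on the node
		stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
		#    (missing, so the tarball is copied over ssh from
		#     .minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0)
		# 4. load the tarball into the CRI-O image store via podman
		sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0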
	
	
	==> CRI-O <==
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.715504745Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.718769419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.71881129Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.718838999Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721815235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721848006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.721864778Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724936079Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724970229Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.724986762Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.728646879Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:27:12 embed-certs-270206 crio[654]: time="2025-11-20T22:27:12.728681596Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.558059868Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0fbad06d-2aab-436d-a14a-ff929b1ec827 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.559427787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a3841d0d-734e-461d-b1ef-102fdd58e10f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.560573968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=11e0202d-97be-4e57-b496-b0981d2db0ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.560671594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.56842746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.570325264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.608487535Z" level=info msg="Created container cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=11e0202d-97be-4e57-b496-b0981d2db0ac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.613813077Z" level=info msg="Starting container: cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab" id=47f54bd4-fd25-429c-8f69-246ff39f35f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.617325655Z" level=info msg="Started container" PID=1737 containerID=cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper id=47f54bd4-fd25-429c-8f69-246ff39f35f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea
	Nov 20 22:27:20 embed-certs-270206 conmon[1735]: conmon cea1f61272e3fca822e4 <ninfo>: container 1737 exited with status 1
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.876180314Z" level=info msg="Removing container: 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.886663731Z" level=info msg="Error loading conmon cgroup of container 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f: cgroup deleted" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:27:20 embed-certs-270206 crio[654]: time="2025-11-20T22:27:20.90164771Z" level=info msg="Removed container 41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9/dashboard-metrics-scraper" id=587cec1e-30af-4013-8310-67e2c79f63f8 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cea1f61272e3f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   7cd8c4080f003       dashboard-metrics-scraper-6ffb444bf9-7kbn9   kubernetes-dashboard
	644045e039c8e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   38fda02f4f1b4       storage-provisioner                          kube-system
	bf57dfc57e754       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   9cd2eaf0fc4fa       kubernetes-dashboard-855c9754f9-8zhp9        kubernetes-dashboard
	e71b73690cd58       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   4bb6f548a779d       coredns-66bc5c9577-c5cg5                     kube-system
	d3a4faf36bc29       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   313ffda348a94       busybox                                      default
	afa79562f9c94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   38fda02f4f1b4       storage-provisioner                          kube-system
	345088aa6124d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   b470247ad86d4       kindnet-9sqjv                                kube-system
	8c6434945bfea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   a820db1c86e1c       kube-proxy-9d84b                             kube-system
	3b1fee8d5af72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   534934c41d64a       kube-scheduler-embed-certs-270206            kube-system
	0e18c657e0d1a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2fccc67f292c0       kube-controller-manager-embed-certs-270206   kube-system
	ea0c8d065057f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   84e8908bbf69a       etcd-embed-certs-270206                      kube-system
	a5edded9820b7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f6e10945c0b6f       kube-apiserver-embed-certs-270206            kube-system
	
	
	==> coredns [e71b73690cd58a0e2ae007ea5eee09f437d2a6e6614e83f0ae2f01702549a622] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57018 - 61497 "HINFO IN 4775963711003691506.2448520270965228608. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025161677s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-270206
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-270206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-270206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_25_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:24:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-270206
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:27:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:24:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:27:02 +0000   Thu, 20 Nov 2025 22:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-270206
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                484a9a63-7f62-411b-a1d5-b7485838eb61
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-c5cg5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-270206                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-9sqjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-270206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-270206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-9d84b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-270206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7kbn9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8zhp9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m34s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m34s)  kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m34s)  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-270206 event: Registered Node embed-certs-270206 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-270206 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-270206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-270206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-270206 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-270206 event: Registered Node embed-certs-270206 in Controller
	
	
	==> dmesg <==
	[Nov20 22:03] overlayfs: idmapped layers are currently not supported
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea0c8d065057f3665d6ec3035564aee5d8e6850f708052453e6159677f28f712] <==
	{"level":"warn","ts":"2025-11-20T22:26:29.480368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.488598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.527947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.551991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.558030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.586798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.600557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.623332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.637202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.706407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.713374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.721873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.739953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.750261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.767516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.798831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.807675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.828408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.868487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.887283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.920232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.944918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.962143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:29.981335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:26:30.046392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:27:28 up  5:09,  0 user,  load average: 3.23, 3.33, 2.72
	Linux embed-certs-270206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [345088aa6124d02a9931e7016c0a1f09f4824adfef2e5d2fd4e64bda6a242344] <==
	I1120 22:26:32.466970       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:26:32.481780       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:26:32.482017       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:26:32.486914       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:26:32.487633       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:26:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:26:32.707433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:26:32.714124       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:26:32.714161       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:26:32.714669       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:27:02.707592       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:27:02.707593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:27:02.715140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:27:02.715241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1120 22:27:03.815175       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:27:03.815213       1 metrics.go:72] Registering metrics
	I1120 22:27:03.815303       1 controller.go:711] "Syncing nftables rules"
	I1120 22:27:12.711101       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:27:12.711155       1 main.go:301] handling current node
	I1120 22:27:22.707085       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 22:27:22.707118       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5edded9820b755f34e9b6d2593a3430839d72f1039a85a103ebda708afb8677] <==
	I1120 22:26:31.172038       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:26:31.172105       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:26:31.204935       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:26:31.205090       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:26:31.206600       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:26:31.206732       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:26:31.206755       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:26:31.206761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:26:31.206768       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:26:31.210299       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:26:31.218675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:26:31.241013       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 22:26:31.264698       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:26:31.595124       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:26:31.881351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:26:32.800990       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:26:33.006110       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:26:33.096335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:26:33.121393       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:26:33.280509       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.160.218"}
	I1120 22:26:33.300816       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.76.226"}
	I1120 22:26:35.510154       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:26:35.707932       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:26:35.708071       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:26:35.914655       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0e18c657e0d1a0e87220cc83c18f4b5c5413a4677fa9b2ca5752a5267bead913] <==
	I1120 22:26:35.376889       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:26:35.376904       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:26:35.380422       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 22:26:35.380446       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:26:35.382645       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 22:26:35.383833       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:26:35.388022       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:26:35.390611       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:26:35.393938       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:26:35.398219       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:26:35.400663       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:26:35.401797       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:26:35.401841       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:26:35.401895       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:26:35.401946       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:26:35.402140       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:26:35.405544       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:26:35.408845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:26:35.410777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:26:35.414325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:26:35.420636       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:26:35.428979       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 22:26:35.430229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:26:35.436399       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:26:35.439673       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [8c6434945bfead8d9b74fa7b85cd734ff1ff9683d7020d6b958ee4c50150bcba] <==
	I1120 22:26:33.022548       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:26:33.456961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:26:33.561118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:26:33.561164       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:26:33.561249       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:26:33.747340       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:26:33.747521       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:26:33.755182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:26:33.755706       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:26:33.756030       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:33.757718       1 config.go:200] "Starting service config controller"
	I1120 22:26:33.764366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:26:33.764546       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:26:33.764579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:26:33.764661       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:26:33.764692       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:26:33.771905       1 config.go:309] "Starting node config controller"
	I1120 22:26:33.771931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:26:33.771938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:26:33.865149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:26:33.865191       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:26:33.865236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3b1fee8d5af72e2b534ec4e7ad37bec76a977b37fb8d8cd98bdabfae224ac824] <==
	I1120 22:26:30.895645       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:26:33.810459       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:26:33.810568       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:26:33.818671       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:26:33.818939       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:26:33.818961       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:26:33.819017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:26:33.822397       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:33.822423       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:26:33.822462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.822470       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.919541       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:26:33.923041       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:26:33.923943       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: I1120 22:26:36.147876     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lx4b\" (UniqueName: \"kubernetes.io/projected/54738609-0716-4bbe-a7c8-f7bf920b502b-kube-api-access-5lx4b\") pod \"kubernetes-dashboard-855c9754f9-8zhp9\" (UID: \"54738609-0716-4bbe-a7c8-f7bf920b502b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8zhp9"
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: W1120 22:26:36.324781     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea WatchSource:0}: Error finding container 7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea: Status 404 returned error can't find the container with id 7cd8c4080f003c8815687e4aeeffce799f2fe9cc2cd6d5bbae2f4eac586e90ea
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: W1120 22:26:36.344572     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/155df8ef967b904c6c819dee753e53eead8fd0f99a77c33279c7b3617c1c89fd/crio-9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9 WatchSource:0}: Error finding container 9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9: Status 404 returned error can't find the container with id 9cd2eaf0fc4fa90e311ed17107fd22bd25705958962f7ea9fc5bdfadf83063f9
	Nov 20 22:26:36 embed-certs-270206 kubelet[781]: I1120 22:26:36.395253     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 22:26:41 embed-certs-270206 kubelet[781]: I1120 22:26:41.743063     781 scope.go:117] "RemoveContainer" containerID="21d05a77fd7d33df2240e28d291277fbcae5731f5eef27d2e434153552e9eef2"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: I1120 22:26:42.751068     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: E1120 22:26:42.751777     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:42 embed-certs-270206 kubelet[781]: I1120 22:26:42.752919     781 scope.go:117] "RemoveContainer" containerID="21d05a77fd7d33df2240e28d291277fbcae5731f5eef27d2e434153552e9eef2"
	Nov 20 22:26:46 embed-certs-270206 kubelet[781]: I1120 22:26:46.293022     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:46 embed-certs-270206 kubelet[781]: E1120 22:26:46.293230     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.558413     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.795134     781 scope.go:117] "RemoveContainer" containerID="36010b6fe9896ccc9a4b1625abe5e841f76764fdd492d50d3386652d73dbd383"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.795536     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: E1120 22:26:59.795727     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:26:59 embed-certs-270206 kubelet[781]: I1120 22:26:59.830257     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8zhp9" podStartSLOduration=15.222341075 podStartE2EDuration="24.829529533s" podCreationTimestamp="2025-11-20 22:26:35 +0000 UTC" firstStartedPulling="2025-11-20 22:26:36.347254671 +0000 UTC m=+10.964724133" lastFinishedPulling="2025-11-20 22:26:45.954443121 +0000 UTC m=+20.571912591" observedRunningTime="2025-11-20 22:26:46.774506333 +0000 UTC m=+21.391975795" watchObservedRunningTime="2025-11-20 22:26:59.829529533 +0000 UTC m=+34.446998994"
	Nov 20 22:27:03 embed-certs-270206 kubelet[781]: I1120 22:27:03.810697     781 scope.go:117] "RemoveContainer" containerID="afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443"
	Nov 20 22:27:06 embed-certs-270206 kubelet[781]: I1120 22:27:06.292875     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:06 embed-certs-270206 kubelet[781]: E1120 22:27:06.293510     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.557354     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.857801     781 scope.go:117] "RemoveContainer" containerID="41f65186c39cf141b3941ac5384e2a4d4cd08a091f424e0cbcb1691611ead52f"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: I1120 22:27:20.858491     781 scope.go:117] "RemoveContainer" containerID="cea1f61272e3fca822e4f102804a4476e9e7b90c8597deb5c2069847084a13ab"
	Nov 20 22:27:20 embed-certs-270206 kubelet[781]: E1120 22:27:20.858894     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7kbn9_kubernetes-dashboard(8aed2c4a-71a6-4192-b4b7-8446916c860b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7kbn9" podUID="8aed2c4a-71a6-4192-b4b7-8446916c860b"
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:27:22 embed-certs-270206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bf57dfc57e7549a44597fd1849b61c6c486546fa7ec4348a7ed3ac28731fa817] <==
	2025/11/20 22:26:46 Starting overwatch
	2025/11/20 22:26:46 Using namespace: kubernetes-dashboard
	2025/11/20 22:26:46 Using in-cluster config to connect to apiserver
	2025/11/20 22:26:46 Using secret token for csrf signing
	2025/11/20 22:26:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:26:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:26:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:26:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:26:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:26:46 Generating JWE encryption key
	2025/11/20 22:26:46 Initializing JWE encryption key from synchronized object
	2025/11/20 22:26:46 Creating in-cluster Sidecar client
	2025/11/20 22:26:46 Serving insecurely on HTTP port: 9090
	2025/11/20 22:26:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:27:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [644045e039c8edbef28f05e2081e02e88e7668ac9e011777e75d8215f8ad38fa] <==
	I1120 22:27:03.898326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:27:03.921279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:27:03.921332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:27:03.924089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:07.399458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:11.659553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:15.259753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:18.315174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.339202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.352652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:27:21.352860       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:27:21.353104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b!
	I1120 22:27:21.358024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f3cea17-cd64-4701-9269-df7a7dbcb868", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b became leader
	W1120 22:27:21.370059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:21.383076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:27:21.458093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-270206_dca390e0-431a-446f-a081-98ec10697b9b!
	W1120 22:27:23.389824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:23.401692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:25.410296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:25.416416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:27.419687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:27:27.439350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [afa79562f9c94f1b51124ed05b060d5c7eaec4ead64b1bbcceb4670611f5c443] <==
	I1120 22:26:32.836600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:27:02.839970       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270206 -n embed-certs-270206: exit status 2 (458.992911ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-270206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.281516ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
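The MK_ADDON_ENABLE_PAUSED failure above is minikube refusing to enable an addon while it believes cluster containers may be paused: the error text shows the check runs "sudo runc list -f json" on the node and that command fails because /run/runc does not exist. The snippet below is a minimal sketch of that kind of paused-state check, not minikube's actual implementation; the JSON field names are assumptions about runc's list output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer models the fields we care about from `runc list -f json`.
// Field names ("id", "status") are assumed, not taken from a spec.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers runs the same command the log shows failing
// (`sudo runc list -f json`) and returns the IDs reported as "paused".
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the command exits 1 with
		// "open /run/runc: no such file or directory",
		// so a check like this aborts before it can list anything.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

With a check of this shape, a missing runc state directory surfaces as the "check paused: list paused" error seen in the stderr block above rather than as a real paused cluster.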
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-135623
helpers_test.go:243: (dbg) docker inspect newest-cni-135623:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	        "Created": "2025-11-20T22:27:40.188334711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1042726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:27:40.258871831Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hostname",
	        "HostsPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hosts",
	        "LogPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084-json.log",
	        "Name": "/newest-cni-135623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-135623:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-135623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	                "LowerDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-135623",
	                "Source": "/var/lib/docker/volumes/newest-cni-135623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-135623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-135623",
	                "name.minikube.sigs.k8s.io": "newest-cni-135623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fb8cf01b296313bf966744522d0bba443ae8b85dc91b117aa812fd7ce6a54e0",
	            "SandboxKey": "/var/run/docker/netns/9fb8cf01b296",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-135623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:bc:f6:29:f4:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "384cacf95f51a5dca0506b04f083a5c52691e66165cd46827abd11d3e9dc7c6a",
	                    "EndpointID": "37cc7482cf62f23a29b3fe1653c3b9873b4b5d13e2203b4c9f0f9c92630b3e90",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-135623",
	                        "22d262387b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25: (1.117245356s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:24 UTC │
	│ delete  │ -p old-k8s-version-443192                                                                                                                                                                                                                     │ old-k8s-version-443192       │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:23 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:23 UTC │ 20 Nov 25 22:25 UTC │
	│ delete  │ -p cert-expiration-420078                                                                                                                                                                                                                     │ cert-expiration-420078       │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:24 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:27:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:27:34.096233 1042271 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:27:34.096803 1042271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:34.096836 1042271 out.go:374] Setting ErrFile to fd 2...
	I1120 22:27:34.096855 1042271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:27:34.097122 1042271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:27:34.097577 1042271 out.go:368] Setting JSON to false
	I1120 22:27:34.099283 1042271 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18579,"bootTime":1763659075,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:27:34.099396 1042271 start.go:143] virtualization:  
	I1120 22:27:34.105237 1042271 out.go:179] * [newest-cni-135623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:27:34.108826 1042271 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:27:34.108893 1042271 notify.go:221] Checking for updates...
	I1120 22:27:34.113163 1042271 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:27:34.119144 1042271 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:27:34.122533 1042271 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:27:34.125760 1042271 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:27:34.129106 1042271 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:27:30.868375 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:27:30.890413 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1120 22:27:30.900349 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1120 22:27:30.900573 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1120 22:27:31.091970 1038356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1120 22:27:31.103814 1038356 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1120 22:27:31.103905 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1120 22:27:31.665757 1038356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:27:31.675382 1038356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:27:31.690424 1038356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:27:31.704591 1038356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 22:27:31.730042 1038356 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:27:31.734450 1038356 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:27:31.745773 1038356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:27:31.864425 1038356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:27:31.882393 1038356 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029 for IP: 192.168.85.2
	I1120 22:27:31.882413 1038356 certs.go:195] generating shared ca certs ...
	I1120 22:27:31.882430 1038356 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:31.882628 1038356 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:27:31.882695 1038356 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:27:31.882708 1038356 certs.go:257] generating profile certs ...
	I1120 22:27:31.882788 1038356 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.key
	I1120 22:27:31.882807 1038356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt with IP's: []
	I1120 22:27:32.141828 1038356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt ...
	I1120 22:27:32.141860 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: {Name:mk828f4256503524005008b6c94841d0c7e820ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:32.142059 1038356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.key ...
	I1120 22:27:32.142074 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.key: {Name:mk3a7e16ea06c3e3466e4eed42f1af4a1e8884d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:32.142167 1038356 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6
	I1120 22:27:32.142183 1038356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt.20ef11a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 22:27:32.360659 1038356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt.20ef11a6 ...
	I1120 22:27:32.360692 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt.20ef11a6: {Name:mk84fbd7ae3733b2c06d0bdde79cb0fdddf7e263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:32.360868 1038356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6 ...
	I1120 22:27:32.360881 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6: {Name:mk6e6bea62609f96f4bc015ec8ad1d509dee0370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:32.360966 1038356 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt.20ef11a6 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt
	I1120 22:27:32.361059 1038356 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key
	I1120 22:27:32.361125 1038356 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key
	I1120 22:27:32.361144 1038356 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt with IP's: []
	I1120 22:27:33.050875 1038356 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt ...
	I1120 22:27:33.050953 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt: {Name:mk1352c502cf4ec5b83f02b8e7ccdc858f72c8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:33.051222 1038356 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key ...
	I1120 22:27:33.051257 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key: {Name:mk54dafcf517afd2df3c37e1ca6034917d960115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
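The lines above show the profile certificates being generated for no-preload-041029: a client cert for "minikube-user", an apiserver serving cert whose SANs include the service ClusterIP (10.96.0.1), localhost, and the node IP (192.168.85.2), and an aggregator proxy-client cert. Below is a minimal Go sketch of issuing such a leaf certificate from an existing CA; it is illustrative only (minikube's own crypto.go differs), and the in-memory CA here merely stands in for the ca.crt/ca.key pair reused in the log.

    // Hedged sketch, not minikube's code: sign an apiserver-style leaf cert
    // carrying IP SANs like those in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In a real run the CA would be loaded from ca.crt/ca.key; generated
        // here purely so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert for the apiserver, with the service ClusterIP and node IP as SANs.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }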
	I1120 22:27:33.051539 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:27:33.051610 1038356 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:27:33.051636 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:27:33.051711 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:27:33.051771 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:27:33.051839 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:27:33.051967 1038356 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:27:33.052618 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:27:33.071925 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:27:33.090134 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:27:33.107851 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:27:33.125345 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:27:33.143400 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:27:33.209077 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:27:33.265196 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:27:33.306931 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:27:33.355449 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:27:33.396877 1038356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:27:33.455654 1038356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:27:33.478122 1038356 ssh_runner.go:195] Run: openssl version
	I1120 22:27:33.493389 1038356 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:27:33.520398 1038356 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:27:33.541633 1038356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:27:33.546722 1038356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:27:33.546784 1038356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:27:33.593822 1038356 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:27:33.602717 1038356 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
	I1120 22:27:33.613808 1038356 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:27:33.625033 1038356 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:27:33.634353 1038356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:27:33.639287 1038356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:27:33.639358 1038356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:27:33.683180 1038356 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:27:33.692642 1038356 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
	I1120 22:27:33.700365 1038356 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:33.708126 1038356 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:27:33.717456 1038356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:33.723578 1038356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:33.723647 1038356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:33.769801 1038356 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:27:33.778251 1038356 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
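The openssl/ln sequence above is how each CA gets installed into the node's trust store: the certificate's OpenSSL subject hash (b5213941 for minikubeCA.pem in this run) names a "<hash>.0" symlink under /etc/ssl/certs so system TLS libraries can find it. A rough Go sketch of that scheme follows; the installCA helper name and paths are illustrative, not minikube's API.

    // Hedged sketch of the hash-symlink scheme visible in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func installCA(pemPath, linkDir string) error {
        // Same command the log runs: compute the OpenSSL subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", linkDir, hash)
        // -f replaces an existing link, matching the `ln -fs` calls above.
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        _ = installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    }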
	I1120 22:27:33.786719 1038356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:27:33.791509 1038356 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:27:33.791570 1038356 kubeadm.go:401] StartCluster: {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:27:33.791648 1038356 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:27:33.791705 1038356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:27:33.826630 1038356 cri.go:89] found id: ""
	I1120 22:27:33.826711 1038356 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:27:33.840757 1038356 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:27:33.848771 1038356 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:27:33.848833 1038356 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:27:33.858557 1038356 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:27:33.858575 1038356 kubeadm.go:158] found existing configuration files:
	
	I1120 22:27:33.858626 1038356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:27:33.867410 1038356 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:27:33.867471 1038356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:27:33.875195 1038356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:27:33.884756 1038356 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:27:33.884816 1038356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:27:33.893851 1038356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:27:33.904497 1038356 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:27:33.904569 1038356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:27:33.915167 1038356 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:27:33.924735 1038356 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:27:33.924803 1038356 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
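The grep/rm pairs above are the stale-config check: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the upcoming kubeadm init can write a fresh one (on this first start none of the files exist, hence every status-2 exit). An illustrative Go sketch of that logic is below; the helper name and the local file reads are assumptions, since the log actually runs sudo grep over SSH.

    // Hedged sketch of the stale kubeconfig cleanup shown above.
    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func cleanStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // Mirrors the `sudo rm -f` commands in the log; a missing file
                // is the common first-start case, so errors are ignored.
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStale("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
        })
    }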
	I1120 22:27:33.934038 1038356 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:27:33.991536 1038356 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 22:27:33.991900 1038356 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:27:34.047704 1038356 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:27:34.048110 1038356 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:27:34.048161 1038356 kubeadm.go:319] OS: Linux
	I1120 22:27:34.048215 1038356 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:27:34.048300 1038356 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:27:34.048354 1038356 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:27:34.048405 1038356 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:27:34.048460 1038356 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:27:34.048515 1038356 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:27:34.048566 1038356 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:27:34.048621 1038356 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:27:34.048674 1038356 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:27:34.127686 1038356 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:27:34.127807 1038356 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:27:34.127927 1038356 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 22:27:34.160937 1038356 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:27:34.132817 1042271 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:34.132965 1042271 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:27:34.161501 1042271 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:27:34.161670 1042271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:34.255619 1042271 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-20 22:27:34.240080887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:34.255733 1042271 docker.go:319] overlay module found
	I1120 22:27:34.258945 1042271 out.go:179] * Using the docker driver based on user configuration
	I1120 22:27:34.261917 1042271 start.go:309] selected driver: docker
	I1120 22:27:34.261945 1042271 start.go:930] validating driver "docker" against <nil>
	I1120 22:27:34.261959 1042271 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:27:34.262730 1042271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:27:34.340788 1042271 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-20 22:27:34.330791033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:27:34.340948 1042271 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 22:27:34.340983 1042271 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 22:27:34.341223 1042271 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:27:34.344496 1042271 out.go:179] * Using Docker driver with root privileges
	I1120 22:27:34.347461 1042271 cni.go:84] Creating CNI manager for ""
	I1120 22:27:34.347528 1042271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:27:34.347542 1042271 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 22:27:34.347629 1042271 start.go:353] cluster config:
	{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:27:34.350780 1042271 out.go:179] * Starting "newest-cni-135623" primary control-plane node in "newest-cni-135623" cluster
	I1120 22:27:34.353755 1042271 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:27:34.356722 1042271 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:27:34.359552 1042271 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:34.359619 1042271 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:27:34.359634 1042271 cache.go:65] Caching tarball of preloaded images
	I1120 22:27:34.359723 1042271 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:27:34.359738 1042271 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:27:34.359856 1042271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:27:34.359880 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json: {Name:mkbef5fbcb6b67c2e19e625b6a4487d9f300c5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:34.360035 1042271 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:27:34.389658 1042271 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:27:34.389679 1042271 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:27:34.389692 1042271 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:27:34.389714 1042271 start.go:360] acquireMachinesLock for newest-cni-135623: {Name:mk0a4bf77fbaa33e901b00e572e51831d9de02c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:27:34.389823 1042271 start.go:364] duration metric: took 92.67µs to acquireMachinesLock for "newest-cni-135623"
	I1120 22:27:34.389848 1042271 start.go:93] Provisioning new machine with config: &{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:27:34.389927 1042271 start.go:125] createHost starting for "" (driver="docker")
	I1120 22:27:34.179042 1038356 out.go:252]   - Generating certificates and keys ...
	I1120 22:27:34.179153 1038356 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:27:34.179230 1038356 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:27:34.295154 1038356 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:27:34.393380 1042271 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:27:34.393613 1042271 start.go:159] libmachine.API.Create for "newest-cni-135623" (driver="docker")
	I1120 22:27:34.393650 1042271 client.go:173] LocalClient.Create starting
	I1120 22:27:34.393736 1042271 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:27:34.393765 1042271 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:34.393778 1042271 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:34.393829 1042271 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:27:34.393850 1042271 main.go:143] libmachine: Decoding PEM data...
	I1120 22:27:34.393860 1042271 main.go:143] libmachine: Parsing certificate...
	I1120 22:27:34.394236 1042271 cli_runner.go:164] Run: docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:27:34.411744 1042271 cli_runner.go:211] docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:27:34.411830 1042271 network_create.go:284] running [docker network inspect newest-cni-135623] to gather additional debugging logs...
	I1120 22:27:34.411847 1042271 cli_runner.go:164] Run: docker network inspect newest-cni-135623
	W1120 22:27:34.441266 1042271 cli_runner.go:211] docker network inspect newest-cni-135623 returned with exit code 1
	I1120 22:27:34.441297 1042271 network_create.go:287] error running [docker network inspect newest-cni-135623]: docker network inspect newest-cni-135623: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-135623 not found
	I1120 22:27:34.441310 1042271 network_create.go:289] output of [docker network inspect newest-cni-135623]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-135623 not found
	
	** /stderr **
	I1120 22:27:34.441402 1042271 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:34.460186 1042271 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:27:34.460560 1042271 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:27:34.460793 1042271 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:27:34.461236 1042271 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001974e40}
	I1120 22:27:34.461254 1042271 network_create.go:124] attempt to create docker network newest-cni-135623 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 22:27:34.461307 1042271 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-135623 newest-cni-135623
	I1120 22:27:34.533318 1042271 network_create.go:108] docker network newest-cni-135623 192.168.76.0/24 created
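The subnet probing above picks the first private /24 not already claimed by an existing Docker bridge network: with 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 taken, this run lands on 192.168.76.0/24 (the candidates advance in steps of 9, as the log shows). A tiny sketch of that search; the function name is made up for illustration.

    // Minimal sketch of the free-subnet search observed in the log above.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third <= 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // existing minikube bridge networks from the log
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, matching the log
    }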
	I1120 22:27:34.533348 1042271 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-135623" container
	I1120 22:27:34.533433 1042271 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:27:34.552798 1042271 cli_runner.go:164] Run: docker volume create newest-cni-135623 --label name.minikube.sigs.k8s.io=newest-cni-135623 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:27:34.577036 1042271 oci.go:103] Successfully created a docker volume newest-cni-135623
	I1120 22:27:34.577137 1042271 cli_runner.go:164] Run: docker run --rm --name newest-cni-135623-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-135623 --entrypoint /usr/bin/test -v newest-cni-135623:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:27:35.350790 1042271 oci.go:107] Successfully prepared a docker volume newest-cni-135623
	I1120 22:27:35.350857 1042271 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:35.350867 1042271 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 22:27:35.350941 1042271 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-135623:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 22:27:35.095585 1038356 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:27:35.970465 1038356 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:27:36.390754 1038356 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:27:36.922142 1038356 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:27:36.922751 1038356 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-041029] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 22:27:37.984563 1038356 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:27:37.985162 1038356 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-041029] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 22:27:38.519983 1038356 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:27:39.153359 1038356 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:27:39.457803 1038356 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:27:39.457991 1038356 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:27:40.383736 1038356 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:27:41.017751 1038356 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 22:27:42.022718 1038356 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:27:42.807411 1038356 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:27:43.496484 1038356 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:27:43.497109 1038356 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:27:43.499759 1038356 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 22:27:40.090917 1042271 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-135623:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.739929148s)
	I1120 22:27:40.090952 1042271 kic.go:203] duration metric: took 4.740080354s to extract preloaded images to volume ...
	W1120 22:27:40.091215 1042271 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:27:40.091524 1042271 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:27:40.173643 1042271 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-135623 --name newest-cni-135623 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-135623 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-135623 --network newest-cni-135623 --ip 192.168.76.2 --volume newest-cni-135623:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:27:40.534391 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Running}}
	I1120 22:27:40.566939 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:27:40.593874 1042271 cli_runner.go:164] Run: docker exec newest-cni-135623 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:27:40.654403 1042271 oci.go:144] the created container "newest-cni-135623" has a running status.
	I1120 22:27:40.654434 1042271 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa...
	I1120 22:27:41.657060 1042271 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:27:41.676639 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:27:41.717138 1042271 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:27:41.717157 1042271 kic_runner.go:114] Args: [docker exec --privileged newest-cni-135623 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:27:41.796773 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:27:41.815758 1042271 machine.go:94] provisionDockerMachine start ...
	I1120 22:27:41.815847 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:41.834135 1042271 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:41.834466 1042271 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1120 22:27:41.834475 1042271 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:27:41.835102 1042271 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38938->127.0.0.1:34192: read: connection reset by peer
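The dial error above is the usual race right after `docker run`: sshd inside the kic container is still starting, so the first handshake on the forwarded port (127.0.0.1:34192 here) is reset and libmachine retries until provisioning succeeds a few lines later. Below is a simplified stand-in that merely waits for the forwarded TCP port to accept connections; it is not libmachine's actual retry loop.

    // Hedged sketch: poll the published SSH port until it becomes reachable.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, attempts int) error {
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh on %s never became reachable", addr)
    }

    func main() {
        _ = waitForSSH("127.0.0.1:34192", 30)
    }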
	I1120 22:27:43.501106 1038356 out.go:252]   - Booting up control plane ...
	I1120 22:27:43.501207 1038356 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:27:43.501289 1038356 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:27:43.502038 1038356 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:27:43.518565 1038356 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:27:43.518679 1038356 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 22:27:43.527010 1038356 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 22:27:43.527638 1038356 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:27:43.527710 1038356 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:27:43.665934 1038356 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 22:27:43.666061 1038356 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 22:27:44.987101 1042271 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:27:44.987182 1042271 ubuntu.go:182] provisioning hostname "newest-cni-135623"
	I1120 22:27:44.987278 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:45.010118 1042271 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:45.010469 1042271 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1120 22:27:45.010482 1042271 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-135623 && echo "newest-cni-135623" | sudo tee /etc/hostname
	I1120 22:27:45.185916 1042271 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:27:45.186114 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:45.215667 1042271 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:45.216222 1042271 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1120 22:27:45.216250 1042271 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-135623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-135623/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-135623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:27:45.387068 1042271 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:27:45.387094 1042271 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:27:45.387135 1042271 ubuntu.go:190] setting up certificates
	I1120 22:27:45.387144 1042271 provision.go:84] configureAuth start
	I1120 22:27:45.387206 1042271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:27:45.404502 1042271 provision.go:143] copyHostCerts
	I1120 22:27:45.404571 1042271 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:27:45.404585 1042271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:27:45.404662 1042271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:27:45.404766 1042271 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:27:45.404775 1042271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:27:45.404803 1042271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:27:45.404876 1042271 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:27:45.404884 1042271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:27:45.404908 1042271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:27:45.404968 1042271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.newest-cni-135623 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-135623]
	I1120 22:27:45.727422 1042271 provision.go:177] copyRemoteCerts
	I1120 22:27:45.727527 1042271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:27:45.727571 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:45.749487 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:27:45.861946 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:27:45.891495 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:27:45.918363 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:27:45.960262 1042271 provision.go:87] duration metric: took 573.08963ms to configureAuth
	I1120 22:27:45.960343 1042271 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:27:45.960605 1042271 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:27:45.960796 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:45.988239 1042271 main.go:143] libmachine: Using SSH client type: native
	I1120 22:27:45.988633 1042271 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1120 22:27:45.988672 1042271 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:27:46.366784 1042271 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:27:46.366857 1042271 machine.go:97] duration metric: took 4.551079441s to provisionDockerMachine
	I1120 22:27:46.366882 1042271 client.go:176] duration metric: took 11.973225472s to LocalClient.Create
	I1120 22:27:46.366910 1042271 start.go:167] duration metric: took 11.973297531s to libmachine.API.Create "newest-cni-135623"
	I1120 22:27:46.366948 1042271 start.go:293] postStartSetup for "newest-cni-135623" (driver="docker")
	I1120 22:27:46.367006 1042271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:27:46.367104 1042271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:27:46.367180 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:46.400555 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:27:46.536587 1042271 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:27:46.542038 1042271 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:27:46.542075 1042271 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:27:46.542087 1042271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:27:46.542156 1042271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:27:46.542257 1042271 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:27:46.542367 1042271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:27:46.555693 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:27:46.584696 1042271 start.go:296] duration metric: took 217.718376ms for postStartSetup
	I1120 22:27:46.585174 1042271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:27:46.621542 1042271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:27:46.621903 1042271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:27:46.621963 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:46.656633 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:27:46.772802 1042271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:27:46.778571 1042271 start.go:128] duration metric: took 12.388630262s to createHost
	I1120 22:27:46.778592 1042271 start.go:83] releasing machines lock for "newest-cni-135623", held for 12.388761594s
	I1120 22:27:46.778665 1042271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:27:46.804476 1042271 ssh_runner.go:195] Run: cat /version.json
	I1120 22:27:46.804527 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:46.804761 1042271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:27:46.804822 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:27:46.843066 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:27:46.860502 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:27:46.979759 1042271 ssh_runner.go:195] Run: systemctl --version
	I1120 22:27:47.093001 1042271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:27:47.162633 1042271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:27:47.168983 1042271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:27:47.169061 1042271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:27:47.207154 1042271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:27:47.207188 1042271 start.go:496] detecting cgroup driver to use...
	I1120 22:27:47.207222 1042271 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:27:47.207283 1042271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:27:47.235579 1042271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:27:47.252124 1042271 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:27:47.252192 1042271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:27:47.279431 1042271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:27:47.303144 1042271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:27:47.520329 1042271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:27:47.715185 1042271 docker.go:234] disabling docker service ...
	I1120 22:27:47.715259 1042271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:27:47.753208 1042271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:27:47.777468 1042271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:27:47.989277 1042271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:27:48.251788 1042271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:27:48.284893 1042271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:27:48.317661 1042271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:27:48.317755 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.330354 1042271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:27:48.330450 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.341969 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.353673 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.365524 1042271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:27:48.374900 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.384781 1042271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.399852 1042271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:27:48.409990 1042271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:27:48.419948 1042271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:27:48.437416 1042271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:27:48.636994 1042271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:27:48.874958 1042271 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:27:48.875042 1042271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:27:48.879115 1042271 start.go:564] Will wait 60s for crictl version
	I1120 22:27:48.879231 1042271 ssh_runner.go:195] Run: which crictl
	I1120 22:27:48.887003 1042271 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:27:48.944584 1042271 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:27:48.944779 1042271 ssh_runner.go:195] Run: crio --version
	I1120 22:27:48.998908 1042271 ssh_runner.go:195] Run: crio --version
	I1120 22:27:49.054820 1042271 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:27:49.056720 1042271 cli_runner.go:164] Run: docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:27:49.078397 1042271 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:27:49.083259 1042271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:27:49.101263 1042271 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 22:27:45.666646 1038356 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001005058s
	I1120 22:27:45.671131 1038356 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 22:27:45.671249 1038356 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1120 22:27:45.671367 1038356 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 22:27:45.671464 1038356 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 22:27:49.103685 1042271 kubeadm.go:884] updating cluster {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:27:49.103836 1042271 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:27:49.103933 1042271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:27:49.157638 1042271 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:27:49.157661 1042271 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:27:49.157733 1042271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:27:49.215565 1042271 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:27:49.215649 1042271 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:27:49.215679 1042271 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:27:49.215816 1042271 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-135623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
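The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in that minikube renders and, per the scp lines further down, copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To see the merged unit on the node, something like the following should work (a sketch; systemctl cat is a standard systemd command, not a call this log makes):

    systemctl cat kubelet.service     # kubelet.service plus the 10-kubeadm.conf override
    sudo systemctl daemon-reload      # the log does this before starting the kubelet
    sudo systemctl start kubelet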
	I1120 22:27:49.215950 1042271 ssh_runner.go:195] Run: crio config
	I1120 22:27:49.328677 1042271 cni.go:84] Creating CNI manager for ""
	I1120 22:27:49.328700 1042271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:27:49.328721 1042271 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 22:27:49.328744 1042271 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-135623 NodeName:newest-cni-135623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:27:49.328873 1042271 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-135623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:27:49.328952 1042271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:27:49.344167 1042271 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:27:49.344243 1042271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:27:49.357618 1042271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:27:49.383045 1042271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:27:49.412872 1042271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
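At this point the kubeadm config rendered above has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new. If you want to sanity-check such a config by hand before it is swapped in, kubeadm's dry-run mode is one option (a sketch; run inside the node, with the binary path taken from the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run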
	I1120 22:27:49.436509 1042271 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:27:49.441081 1042271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:27:49.462195 1042271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:27:49.701395 1042271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:27:49.734818 1042271 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623 for IP: 192.168.76.2
	I1120 22:27:49.734841 1042271 certs.go:195] generating shared ca certs ...
	I1120 22:27:49.734858 1042271 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:49.735057 1042271 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:27:49.735112 1042271 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:27:49.735126 1042271 certs.go:257] generating profile certs ...
	I1120 22:27:49.735191 1042271 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key
	I1120 22:27:49.735208 1042271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.crt with IP's: []
	I1120 22:27:49.868171 1042271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.crt ...
	I1120 22:27:49.868204 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.crt: {Name:mk7321447420fc7ebce047cddec2fdd2eb5a77c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:49.868399 1042271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key ...
	I1120 22:27:49.868416 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key: {Name:mkc926b16504948ae30e7d9514c85568345bff9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:49.868504 1042271 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1
	I1120 22:27:49.868528 1042271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt.0fed1dd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 22:27:50.160048 1042271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt.0fed1dd1 ...
	I1120 22:27:50.160082 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt.0fed1dd1: {Name:mk5faf725bfe8efdd485d529701d4f5f1d426f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:50.160333 1042271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1 ...
	I1120 22:27:50.160350 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1: {Name:mk97766184900f4ab790b70a86d2b70f45de0f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:50.160462 1042271 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt.0fed1dd1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt
	I1120 22:27:50.160549 1042271 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key
	I1120 22:27:50.160610 1042271 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key
	I1120 22:27:50.160628 1042271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt with IP's: []
	I1120 22:27:50.387632 1042271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt ...
	I1120 22:27:50.387664 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt: {Name:mk69f16a06e0fcbf7b7784fe98b4fa5bf673eeb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:50.387878 1042271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key ...
	I1120 22:27:50.387894 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key: {Name:mk6b1d53400a0f2cc190090d33b76c679cd78e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:27:50.388093 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:27:50.388137 1042271 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:27:50.388152 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:27:50.388180 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:27:50.388214 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:27:50.388241 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:27:50.388291 1042271 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:27:50.388855 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:27:50.408198 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:27:50.426090 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:27:50.443920 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:27:50.461073 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:27:50.478164 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:27:50.495689 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:27:50.513169 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:27:50.530382 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:27:50.548252 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:27:50.576934 1042271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:27:50.601450 1042271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:27:50.621257 1042271 ssh_runner.go:195] Run: openssl version
	I1120 22:27:50.633573 1042271 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:50.646324 1042271 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:27:50.660294 1042271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:50.667577 1042271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:50.667648 1042271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:27:50.749966 1042271 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:27:50.775296 1042271 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 22:27:50.785524 1042271 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:27:50.802563 1042271 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:27:50.816275 1042271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:27:50.823842 1042271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:27:50.823911 1042271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:27:50.870947 1042271 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:27:50.878898 1042271 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
	I1120 22:27:50.887080 1042271 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:27:50.894905 1042271 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:27:50.907499 1042271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:27:50.911701 1042271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:27:50.911777 1042271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:27:50.959323 1042271 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:27:50.967458 1042271 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
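The openssl/ln pairs above implement OpenSSL's hashed CA lookup: each certificate copied into /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients on the node trust it. A sketch of the same step for a single cert (paths from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, matching the link created above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"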
	I1120 22:27:50.975225 1042271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:27:50.983394 1042271 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:27:50.983447 1042271 kubeadm.go:401] StartCluster: {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:27:50.983537 1042271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:27:50.983604 1042271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:27:51.041611 1042271 cri.go:89] found id: ""
	I1120 22:27:51.041695 1042271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:27:51.055915 1042271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:27:51.069430 1042271 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:27:51.069499 1042271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:27:51.080392 1042271 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:27:51.080414 1042271 kubeadm.go:158] found existing configuration files:
	
	I1120 22:27:51.080471 1042271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:27:51.095365 1042271 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:27:51.095443 1042271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:27:51.109088 1042271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:27:51.123581 1042271 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:27:51.123650 1042271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:27:51.132533 1042271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:27:51.148787 1042271 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:27:51.148865 1042271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:27:51.162054 1042271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:27:51.173033 1042271 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:27:51.173099 1042271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 22:27:51.185705 1042271 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:27:51.266517 1042271 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 22:27:51.266849 1042271 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:27:51.302856 1042271 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:27:51.302934 1042271 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:27:51.302986 1042271 kubeadm.go:319] OS: Linux
	I1120 22:27:51.303046 1042271 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:27:51.303117 1042271 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:27:51.303171 1042271 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:27:51.303226 1042271 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:27:51.303280 1042271 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:27:51.303342 1042271 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:27:51.303397 1042271 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:27:51.303451 1042271 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:27:51.303503 1042271 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:27:51.403411 1042271 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:27:51.403529 1042271 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:27:51.403637 1042271 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 22:27:51.422151 1042271 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:27:51.427345 1042271 out.go:252]   - Generating certificates and keys ...
	I1120 22:27:51.427439 1042271 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:27:51.427517 1042271 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:27:52.094498 1042271 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:27:52.368822 1042271 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:27:52.623377 1042271 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:27:53.590373 1042271 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:27:54.029031 1042271 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:27:54.029335 1042271 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-135623] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:27:51.346288 1038356 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.673947459s
	I1120 22:27:54.165843 1038356 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.494683092s
	I1120 22:27:55.678135 1038356 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.006936579s
	I1120 22:27:55.715276 1038356 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:27:55.734972 1038356 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:27:55.767028 1038356 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:27:55.767239 1038356 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-041029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:27:55.782656 1038356 kubeadm.go:319] [bootstrap-token] Using token: et02hh.tfrcpeqq38msm330
	I1120 22:27:55.785693 1038356 out.go:252]   - Configuring RBAC rules ...
	I1120 22:27:55.785843 1038356 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:27:55.800826 1038356 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:27:55.812630 1038356 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:27:55.819826 1038356 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:27:55.825907 1038356 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:27:55.830716 1038356 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:27:56.086673 1038356 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:27:56.767142 1038356 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:27:57.087335 1038356 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:27:57.088462 1038356 kubeadm.go:319] 
	I1120 22:27:57.088543 1038356 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:27:57.088550 1038356 kubeadm.go:319] 
	I1120 22:27:57.088639 1038356 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:27:57.088645 1038356 kubeadm.go:319] 
	I1120 22:27:57.088671 1038356 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:27:57.088733 1038356 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:27:57.088799 1038356 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:27:57.088805 1038356 kubeadm.go:319] 
	I1120 22:27:57.088861 1038356 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:27:57.088866 1038356 kubeadm.go:319] 
	I1120 22:27:57.088931 1038356 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:27:57.088937 1038356 kubeadm.go:319] 
	I1120 22:27:57.089007 1038356 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:27:57.089100 1038356 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:27:57.089171 1038356 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:27:57.089176 1038356 kubeadm.go:319] 
	I1120 22:27:57.089263 1038356 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:27:57.089347 1038356 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:27:57.089352 1038356 kubeadm.go:319] 
	I1120 22:27:57.089447 1038356 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token et02hh.tfrcpeqq38msm330 \
	I1120 22:27:57.089558 1038356 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:27:57.089587 1038356 kubeadm.go:319] 	--control-plane 
	I1120 22:27:57.089592 1038356 kubeadm.go:319] 
	I1120 22:27:57.089686 1038356 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:27:57.089691 1038356 kubeadm.go:319] 
	I1120 22:27:57.089775 1038356 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token et02hh.tfrcpeqq38msm330 \
	I1120 22:27:57.090294 1038356 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:27:57.095761 1038356 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:27:57.096195 1038356 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:27:57.096326 1038356 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
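The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash. If that value ever needs to be recomputed by hand, the upstream kubeadm documentation derives it from the cluster CA's public key roughly as follows (a sketch; the cert path assumes minikube's certificateDir from the config earlier in this log, and the pipeline assumes an RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'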
	I1120 22:27:57.096343 1038356 cni.go:84] Creating CNI manager for ""
	I1120 22:27:57.096352 1038356 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:27:57.099466 1038356 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:27:54.358308 1042271 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:27:54.358625 1042271 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-135623] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:27:54.716366 1042271 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:27:54.842546 1042271 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:27:55.372478 1042271 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:27:55.372761 1042271 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:27:55.685931 1042271 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:27:56.189125 1042271 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 22:27:56.415324 1042271 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:27:56.694551 1042271 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:27:57.384604 1042271 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:27:57.385413 1042271 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:27:57.388873 1042271 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 22:27:57.392552 1042271 out.go:252]   - Booting up control plane ...
	I1120 22:27:57.392655 1042271 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:27:57.394130 1042271 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:27:57.396486 1042271 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:27:57.412615 1042271 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:27:57.412729 1042271 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 22:27:57.421696 1042271 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 22:27:57.421801 1042271 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:27:57.421842 1042271 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:27:57.617409 1042271 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 22:27:57.617536 1042271 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 22:27:57.102371 1038356 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:27:57.107774 1038356 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:27:57.107808 1038356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:27:57.132336 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:27:57.575751 1038356 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:27:57.575886 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:27:57.575951 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-041029 minikube.k8s.io/updated_at=2025_11_20T22_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-041029 minikube.k8s.io/primary=true
	I1120 22:27:57.878278 1038356 ops.go:34] apiserver oom_adj: -16
	I1120 22:27:57.878380 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:27:58.378848 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:27:58.879435 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:27:59.378456 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:27:59.879376 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:00.378876 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:00.878482 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:01.378855 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:01.878492 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:02.379153 1038356 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:02.634246 1038356 kubeadm.go:1114] duration metric: took 5.058409041s to wait for elevateKubeSystemPrivileges
	I1120 22:28:02.634273 1038356 kubeadm.go:403] duration metric: took 28.84271217s to StartCluster
	I1120 22:28:02.634290 1038356 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:02.634352 1038356 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:02.635083 1038356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:02.635314 1038356 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:02.635409 1038356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:28:02.635647 1038356 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:02.635680 1038356 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:02.635738 1038356 addons.go:70] Setting storage-provisioner=true in profile "no-preload-041029"
	I1120 22:28:02.635751 1038356 addons.go:239] Setting addon storage-provisioner=true in "no-preload-041029"
	I1120 22:28:02.635770 1038356 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:02.636382 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:02.636958 1038356 addons.go:70] Setting default-storageclass=true in profile "no-preload-041029"
	I1120 22:28:02.636978 1038356 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-041029"
	I1120 22:28:02.637255 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:02.639677 1038356 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:02.643519 1038356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:02.673453 1038356 addons.go:239] Setting addon default-storageclass=true in "no-preload-041029"
	I1120 22:28:02.673501 1038356 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:02.673912 1038356 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:02.695092 1038356 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:27:59.619840 1042271 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.002718866s
	I1120 22:27:59.624343 1042271 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 22:27:59.624442 1042271 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 22:27:59.624536 1042271 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 22:27:59.624618 1042271 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 22:28:02.132456 1042271 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.506789861s
	I1120 22:28:02.698241 1038356 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:02.698263 1038356 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:02.698333 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:02.723247 1038356 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:02.723268 1038356 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:02.723354 1038356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:02.737182 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:02.760727 1038356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:03.238951 1038356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:03.280294 1038356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:03.280509 1038356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:28:03.326901 1038356 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:04.702058 1038356 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.462999889s)
	I1120 22:28:04.940194 1038356 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.659866878s)
	I1120 22:28:04.940388 1038356 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.659846029s)
	I1120 22:28:04.940460 1038356 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
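The long sed pipeline above injects a hosts plugin stanza into CoreDNS's Corefile so pods can resolve host.minikube.internal to the gateway; the "host record injected" line confirms it took effect. A sketch of what gets added and how to inspect it (the stanza below is reconstructed from the sed expression, not copied from the resulting ConfigMap):

    # Expected Corefile addition:
    #     hosts {
    #        192.168.85.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl -n kube-system get configmap coredns -o yaml   # inspect the live Corefile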
	I1120 22:28:04.941949 1038356 node_ready.go:35] waiting up to 6m0s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:05.410203 1038356 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.083163658s)
	I1120 22:28:05.413511 1038356 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 22:28:05.783876 1042271 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.159520233s
	I1120 22:28:08.128197 1042271 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503782147s
	I1120 22:28:08.151622 1042271 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:28:08.170092 1042271 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:28:08.193848 1042271 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:28:08.194063 1042271 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-135623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:28:08.221291 1042271 kubeadm.go:319] [bootstrap-token] Using token: p8aucc.tptea7jifn5si56v
	I1120 22:28:08.225438 1042271 out.go:252]   - Configuring RBAC rules ...
	I1120 22:28:08.225575 1042271 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:28:08.241890 1042271 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:28:08.257923 1042271 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:28:08.266309 1042271 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:28:08.271960 1042271 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:28:08.278840 1042271 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:28:08.535997 1042271 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:28:09.019392 1042271 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:28:09.535567 1042271 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:28:09.536835 1042271 kubeadm.go:319] 
	I1120 22:28:09.536911 1042271 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:28:09.536923 1042271 kubeadm.go:319] 
	I1120 22:28:09.537003 1042271 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:28:09.537013 1042271 kubeadm.go:319] 
	I1120 22:28:09.537039 1042271 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:28:09.537105 1042271 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:28:09.537161 1042271 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:28:09.537176 1042271 kubeadm.go:319] 
	I1120 22:28:09.537233 1042271 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:28:09.537246 1042271 kubeadm.go:319] 
	I1120 22:28:09.537298 1042271 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:28:09.537304 1042271 kubeadm.go:319] 
	I1120 22:28:09.537359 1042271 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:28:09.537440 1042271 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:28:09.537515 1042271 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:28:09.537523 1042271 kubeadm.go:319] 
	I1120 22:28:09.537629 1042271 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:28:09.537713 1042271 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:28:09.537721 1042271 kubeadm.go:319] 
	I1120 22:28:09.537810 1042271 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token p8aucc.tptea7jifn5si56v \
	I1120 22:28:09.537921 1042271 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:28:09.537946 1042271 kubeadm.go:319] 	--control-plane 
	I1120 22:28:09.537953 1042271 kubeadm.go:319] 
	I1120 22:28:09.538041 1042271 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:28:09.538049 1042271 kubeadm.go:319] 
	I1120 22:28:09.538135 1042271 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token p8aucc.tptea7jifn5si56v \
	I1120 22:28:09.538245 1042271 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:28:09.542824 1042271 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:28:09.543089 1042271 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:28:09.543204 1042271 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 22:28:09.543223 1042271 cni.go:84] Creating CNI manager for ""
	I1120 22:28:09.543230 1042271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:09.548399 1042271 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:28:05.416323 1038356 addons.go:515] duration metric: took 2.780628129s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 22:28:05.450864 1038356 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-041029" context rescaled to 1 replicas
	W1120 22:28:06.945509 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	W1120 22:28:09.445177 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	I1120 22:28:09.551340 1042271 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:28:09.555782 1042271 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:28:09.555842 1042271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:28:09.574887 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:28:09.884514 1042271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:28:09.884684 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:09.884786 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-135623 minikube.k8s.io/updated_at=2025_11_20T22_28_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=newest-cni-135623 minikube.k8s.io/primary=true
	I1120 22:28:09.896763 1042271 ops.go:34] apiserver oom_adj: -16
	I1120 22:28:10.034487 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:10.535240 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:11.035094 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:11.535492 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:12.035113 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:12.535592 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:13.034821 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:13.534812 1042271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:28:13.645190 1042271 kubeadm.go:1114] duration metric: took 3.760558406s to wait for elevateKubeSystemPrivileges
	I1120 22:28:13.645230 1042271 kubeadm.go:403] duration metric: took 22.661786071s to StartCluster
	I1120 22:28:13.645251 1042271 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:13.645312 1042271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:13.646235 1042271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:13.646471 1042271 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:13.646586 1042271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:28:13.646839 1042271 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:13.646881 1042271 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:13.646942 1042271 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-135623"
	I1120 22:28:13.646966 1042271 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-135623"
	I1120 22:28:13.647021 1042271 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:13.647822 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:13.648217 1042271 addons.go:70] Setting default-storageclass=true in profile "newest-cni-135623"
	I1120 22:28:13.648239 1042271 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-135623"
	I1120 22:28:13.648494 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:13.649798 1042271 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:13.653700 1042271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:13.690407 1042271 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:13.693576 1042271 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:13.693602 1042271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:13.693671 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:13.698685 1042271 addons.go:239] Setting addon default-storageclass=true in "newest-cni-135623"
	I1120 22:28:13.698736 1042271 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:13.699231 1042271 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:13.744812 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:13.746448 1042271 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:13.746474 1042271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:13.746533 1042271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:13.781076 1042271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:14.026230 1042271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:28:14.031709 1042271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:14.047535 1042271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:14.057350 1042271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1120 22:28:11.445304 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	W1120 22:28:13.945168 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	I1120 22:28:14.852298 1042271 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 22:28:14.854773 1042271 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:14.855758 1042271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:15.159851 1042271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.102416431s)
	I1120 22:28:15.160083 1042271 api_server.go:72] duration metric: took 1.513578844s to wait for apiserver process to appear ...
	I1120 22:28:15.160097 1042271 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:15.160128 1042271 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:28:15.163061 1042271 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 22:28:15.166847 1042271 addons.go:515] duration metric: took 1.51993744s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 22:28:15.171926 1042271 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 22:28:15.175414 1042271 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:15.175493 1042271 api_server.go:131] duration metric: took 15.386873ms to wait for apiserver health ...
	I1120 22:28:15.175519 1042271 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:15.184803 1042271 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:15.184905 1042271 system_pods.go:61] "coredns-66bc5c9577-9flb9" [3dc2f756-6d87-4c6c-a277-f78afd3dee9d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:15.184933 1042271 system_pods.go:61] "etcd-newest-cni-135623" [0de7f3f2-008e-4d81-9d64-817f1d6baac9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:28:15.184974 1042271 system_pods.go:61] "kindnet-qnvsk" [f7a38583-b1d7-4129-ad46-dd3ccb7319eb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 22:28:15.184999 1042271 system_pods.go:61] "kube-apiserver-newest-cni-135623" [d04f855f-e0d5-4f66-8479-486e7801a0c8] Running
	I1120 22:28:15.185027 1042271 system_pods.go:61] "kube-controller-manager-newest-cni-135623" [216bbe7c-632b-4b80-bc44-3198afcc3979] Running
	I1120 22:28:15.185065 1042271 system_pods.go:61] "kube-proxy-8cqbf" [0c0b8be5-8252-4341-b19a-5270b86a2b1d] Running
	I1120 22:28:15.185086 1042271 system_pods.go:61] "kube-scheduler-newest-cni-135623" [8d3fed71-fe6a-4425-ad2d-c37cd0c2de1d] Running
	I1120 22:28:15.185109 1042271 system_pods.go:61] "storage-provisioner" [21cbba0f-bc0e-4982-a846-6b4daa0506ba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:15.185138 1042271 system_pods.go:74] duration metric: took 9.599771ms to wait for pod list to return data ...
	I1120 22:28:15.185172 1042271 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:15.194315 1042271 default_sa.go:45] found service account: "default"
	I1120 22:28:15.194402 1042271 default_sa.go:55] duration metric: took 9.209333ms for default service account to be created ...
	I1120 22:28:15.194430 1042271 kubeadm.go:587] duration metric: took 1.547926534s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:15.194473 1042271 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:15.211554 1042271 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:15.211651 1042271 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:15.211687 1042271 node_conditions.go:105] duration metric: took 17.185361ms to run NodePressure ...
	I1120 22:28:15.211723 1042271 start.go:242] waiting for startup goroutines ...
	I1120 22:28:15.357776 1042271 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-135623" context rescaled to 1 replicas
	I1120 22:28:15.357858 1042271 start.go:247] waiting for cluster config update ...
	I1120 22:28:15.357887 1042271 start.go:256] writing updated cluster config ...
	I1120 22:28:15.358241 1042271 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:15.462702 1042271 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:15.466320 1042271 out.go:179] * Done! kubectl is now configured to use "newest-cni-135623" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.564724684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.567479534Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-8cqbf/POD" id=0adebd9c-95ed-4f55-87d2-aab1727d87e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.568165049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.597826944Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6e9c6342-cb5f-4255-8458-486951b7e1ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.625085255Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0adebd9c-95ed-4f55-87d2-aab1727d87e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.639284773Z" level=info msg="Ran pod sandbox ee46095c0e59eac6cdc5c5bc6edae556f07b3de84f60a79f19b164db81434524 with infra container: kube-system/kube-proxy-8cqbf/POD" id=0adebd9c-95ed-4f55-87d2-aab1727d87e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.646146818Z" level=info msg="Ran pod sandbox 54f82bd9519e1e9181a3619d5a62a71b62d325a98a16e552fe36f83c715faa85 with infra container: kube-system/kindnet-qnvsk/POD" id=6e9c6342-cb5f-4255-8458-486951b7e1ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.647635731Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d1b0e802-4304-4170-b3ca-7ab1f43e8994 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.652406632Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c11ca40a-8db1-414c-afcb-56ad04009eec name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.662138088Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2999033e-2bee-4c9f-976a-56278404465f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.672738774Z" level=info msg="Creating container: kube-system/kube-proxy-8cqbf/kube-proxy" id=59887dfc-1946-4a7c-9aab-6f2a94c62a69 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.672914702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.675094989Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fa747552-3c0f-4340-9456-6b61eae46e25 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.691882257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.703639454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.707341984Z" level=info msg="Creating container: kube-system/kindnet-qnvsk/kindnet-cni" id=d8efc2e6-85d3-47ee-8989-d768cf53de53 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.707590397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.718114143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.7186053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.743246708Z" level=info msg="Created container 616c6d4262cf72e7c3be4da81c970a4bb20b71d08cb5c66a4e2d14d3a6e76576: kube-system/kube-proxy-8cqbf/kube-proxy" id=59887dfc-1946-4a7c-9aab-6f2a94c62a69 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.747060074Z" level=info msg="Starting container: 616c6d4262cf72e7c3be4da81c970a4bb20b71d08cb5c66a4e2d14d3a6e76576" id=07dc02e3-5df9-4aa3-8b09-0924ca7f94be name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.75075736Z" level=info msg="Started container" PID=1479 containerID=616c6d4262cf72e7c3be4da81c970a4bb20b71d08cb5c66a4e2d14d3a6e76576 description=kube-system/kube-proxy-8cqbf/kube-proxy id=07dc02e3-5df9-4aa3-8b09-0924ca7f94be name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee46095c0e59eac6cdc5c5bc6edae556f07b3de84f60a79f19b164db81434524
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.780520007Z" level=info msg="Created container 00d4c1ed379ef7dc5a5a3d365ab4f2e716c51321c396eaffa6e75fa6f7504464: kube-system/kindnet-qnvsk/kindnet-cni" id=d8efc2e6-85d3-47ee-8989-d768cf53de53 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.781479667Z" level=info msg="Starting container: 00d4c1ed379ef7dc5a5a3d365ab4f2e716c51321c396eaffa6e75fa6f7504464" id=e4af57b5-1ddb-401e-ab72-e29692fc7fe0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:14 newest-cni-135623 crio[840]: time="2025-11-20T22:28:14.784190397Z" level=info msg="Started container" PID=1484 containerID=00d4c1ed379ef7dc5a5a3d365ab4f2e716c51321c396eaffa6e75fa6f7504464 description=kube-system/kindnet-qnvsk/kindnet-cni id=e4af57b5-1ddb-401e-ab72-e29692fc7fe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54f82bd9519e1e9181a3619d5a62a71b62d325a98a16e552fe36f83c715faa85
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	00d4c1ed379ef       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   54f82bd9519e1       kindnet-qnvsk                               kube-system
	616c6d4262cf7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   ee46095c0e59e       kube-proxy-8cqbf                            kube-system
	5a3f0fdbb2b38       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            0                   287d169325df0       kube-apiserver-newest-cni-135623            kube-system
	e4cf8d32080ba       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      0                   259ae96fd8d8a       etcd-newest-cni-135623                      kube-system
	db6f76de04e3c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   0                   25fee98a4b486       kube-controller-manager-newest-cni-135623   kube-system
	05c230eca514a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            0                   0c4e2f8784ad4       kube-scheduler-newest-cni-135623            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-135623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-135623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-135623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_28_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:28:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-135623
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:28:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:28:09 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:28:09 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:28:09 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 22:28:09 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-135623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                04e07bd9-c8a6-4d46-86ba-5a3653e3028d
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-135623                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-qnvsk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-135623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-135623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-8cqbf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-135623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-135623 event: Registered Node newest-cni-135623 in Controller
	
	
	==> dmesg <==
	[Nov20 22:05] overlayfs: idmapped layers are currently not supported
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e4cf8d32080ba0d520a84a35d8f2f8f5548f3b0754177be17ab5cdb13ab0b9b7] <==
	{"level":"warn","ts":"2025-11-20T22:28:03.411819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.451040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.483315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.523329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.558129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.579877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.606092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.653137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.669138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.710043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.734958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.760481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.817911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.871725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.900617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.925377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.957476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:03.989748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.025259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.052470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.091398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.129550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.191024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.210397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:04.343528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:28:17 up  5:10,  0 user,  load average: 6.43, 4.11, 3.02
	Linux newest-cni-135623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [00d4c1ed379ef7dc5a5a3d365ab4f2e716c51321c396eaffa6e75fa6f7504464] <==
	I1120 22:28:14.826970       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:28:14.904980       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:28:14.905104       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:28:14.905116       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:28:14.905131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:28:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:28:15.110051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:28:15.113823       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:28:15.117391       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:28:15.117797       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5a3f0fdbb2b3882f07000f6205b6ae60c4d459cfe556f250a7a7ef28da7d90c0] <==
	I1120 22:28:06.019469       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:28:06.021384       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 22:28:06.021503       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:28:06.021538       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:28:06.021872       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:28:06.028431       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:28:06.038057       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:28:06.047989       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:28:06.411288       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 22:28:06.420114       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 22:28:06.420142       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:28:07.503160       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:28:07.614050       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:28:07.768833       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 22:28:07.786519       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 22:28:07.788071       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:28:07.793803       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:28:07.816905       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:28:08.985089       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:28:09.017566       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 22:28:09.033941       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:28:13.861226       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:28:13.916226       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:28:13.926351       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:28:14.004088       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [db6f76de04e3c38330610768088a6acdec520b32f60c95a7843ebc4107dc7b68] <==
	I1120 22:28:12.859384       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-135623"
	I1120 22:28:12.859436       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 22:28:12.860917       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:28:12.860960       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:28:12.861668       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 22:28:12.863314       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:28:12.863381       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 22:28:12.863431       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:28:12.863620       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:28:12.864213       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:28:12.864532       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:28:12.865832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 22:28:12.865950       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:28:12.867888       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:28:12.869995       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:28:12.871397       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:28:12.876083       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:12.887562       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 22:28:12.887632       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:28:12.887651       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 22:28:12.887706       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 22:28:12.887735       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 22:28:12.887754       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 22:28:12.887760       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 22:28:12.897212       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-135623" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [616c6d4262cf72e7c3be4da81c970a4bb20b71d08cb5c66a4e2d14d3a6e76576] <==
	I1120 22:28:14.822399       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:28:14.955624       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:28:15.070451       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:28:15.070495       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:28:15.070575       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:28:15.165602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:28:15.165728       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:28:15.218940       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:28:15.219401       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:28:15.219580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:15.220969       1 config.go:200] "Starting service config controller"
	I1120 22:28:15.221039       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:28:15.221083       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:28:15.223934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:28:15.221292       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:28:15.226604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:28:15.223723       1 config.go:309] "Starting node config controller"
	I1120 22:28:15.226688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:28:15.226717       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:28:15.321582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:28:15.326745       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:28:15.326757       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05c230eca514a8bc5dba2cf39205556f7ff11e33ef8e209ed74f183af2e6b460] <==
	E1120 22:28:05.844256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:28:05.844338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:28:05.844408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:28:05.844472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:28:05.844538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 22:28:05.844630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:28:05.844714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:28:05.867483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:28:05.883307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:28:05.883476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 22:28:05.883548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:28:06.735562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 22:28:06.742929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:28:06.778338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 22:28:06.851439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 22:28:06.853753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:28:06.881190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:28:06.916623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 22:28:06.921513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:28:06.967924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:28:06.996616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:28:07.010517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:28:07.017952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:28:07.359678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:28:09.756349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.174957    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7db297c0fec497d095b572249bbb38b4-kubeconfig\") pod \"kube-controller-manager-newest-cni-135623\" (UID: \"7db297c0fec497d095b572249bbb38b4\") " pod="kube-system/kube-controller-manager-newest-cni-135623"
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.229624    1301 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-135623"
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.243368    1301 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-135623"
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.243475    1301 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-135623"
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.924297    1301 apiserver.go:52] "Watching apiserver"
	Nov 20 22:28:09 newest-cni-135623 kubelet[1301]: I1120 22:28:09.963253    1301 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: I1120 22:28:10.104712    1301 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-135623"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: E1120 22:28:10.121332    1301 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-135623\" already exists" pod="kube-system/etcd-newest-cni-135623"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: I1120 22:28:10.167997    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-135623" podStartSLOduration=4.167964667 podStartE2EDuration="4.167964667s" podCreationTimestamp="2025-11-20 22:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:10.150715034 +0000 UTC m=+1.348039372" watchObservedRunningTime="2025-11-20 22:28:10.167964667 +0000 UTC m=+1.365289005"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: I1120 22:28:10.168129    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-135623" podStartSLOduration=1.168124119 podStartE2EDuration="1.168124119s" podCreationTimestamp="2025-11-20 22:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:10.166020806 +0000 UTC m=+1.363345160" watchObservedRunningTime="2025-11-20 22:28:10.168124119 +0000 UTC m=+1.365448457"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: I1120 22:28:10.197344    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-135623" podStartSLOduration=2.197322271 podStartE2EDuration="2.197322271s" podCreationTimestamp="2025-11-20 22:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:10.182761411 +0000 UTC m=+1.380085765" watchObservedRunningTime="2025-11-20 22:28:10.197322271 +0000 UTC m=+1.394646650"
	Nov 20 22:28:10 newest-cni-135623 kubelet[1301]: I1120 22:28:10.214046    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-135623" podStartSLOduration=1.21402441 podStartE2EDuration="1.21402441s" podCreationTimestamp="2025-11-20 22:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:10.197765755 +0000 UTC m=+1.395090093" watchObservedRunningTime="2025-11-20 22:28:10.21402441 +0000 UTC m=+1.411348756"
	Nov 20 22:28:12 newest-cni-135623 kubelet[1301]: I1120 22:28:12.950927    1301 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 22:28:12 newest-cni-135623 kubelet[1301]: I1120 22:28:12.951554    1301 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326611    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-cni-cfg\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326677    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-xtables-lock\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326698    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-xtables-lock\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326714    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-lib-modules\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326759    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8n47\" (UniqueName: \"kubernetes.io/projected/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-kube-api-access-h8n47\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326803    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzbs\" (UniqueName: \"kubernetes.io/projected/0c0b8be5-8252-4341-b19a-5270b86a2b1d-kube-api-access-9xzbs\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326836    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-lib-modules\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.326853    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c0b8be5-8252-4341-b19a-5270b86a2b1d-kube-proxy\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: I1120 22:28:14.473143    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:28:14 newest-cni-135623 kubelet[1301]: W1120 22:28:14.633507    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/crio-54f82bd9519e1e9181a3619d5a62a71b62d325a98a16e552fe36f83c715faa85 WatchSource:0}: Error finding container 54f82bd9519e1e9181a3619d5a62a71b62d325a98a16e552fe36f83c715faa85: Status 404 returned error can't find the container with id 54f82bd9519e1e9181a3619d5a62a71b62d325a98a16e552fe36f83c715faa85
	Nov 20 22:28:15 newest-cni-135623 kubelet[1301]: I1120 22:28:15.205287    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8cqbf" podStartSLOduration=1.205178502 podStartE2EDuration="1.205178502s" podCreationTimestamp="2025-11-20 22:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:15.17333358 +0000 UTC m=+6.370657918" watchObservedRunningTime="2025-11-20 22:28:15.205178502 +0000 UTC m=+6.402502856"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-135623 -n newest-cni-135623
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-135623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9flb9 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner: exit status 1 (91.487379ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9flb9" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)
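Note: the kube-scheduler "Failed to watch ... is forbidden" lines in the log dump above appear to be startup-time RBAC errors; the closing "Caches are synced" message suggests the scheduler recovered once authorization caught up. A quick manual check of those permissions, assuming the newest-cni-135623 context is still reachable, would be:

	kubectl --context newest-cni-135623 auth can-i list poddisruptionbudgets --as=system:kube-scheduler
	kubectl --context newest-cni-135623 auth can-i list persistentvolumes --as=system:kube-scheduler

Both should print "yes" on a healthy cluster.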

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (372.967361ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
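The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-runtime check, which shells out to runc inside the node and failed here because /run/runc was not found. The same check can be re-run by hand (a sketch, assuming the no-preload-041029 profile is still running and runc is the configured runtime):

	out/minikube-linux-arm64 -p no-preload-041029 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p no-preload-041029 ssh -- ls -ld /run/runc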
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-041029 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-041029 describe deploy/metrics-server -n kube-system: exit status 1 (135.148091ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-041029 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
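If the metrics-server deployment had been created, the image assertion above could be verified directly with a jsonpath query (a sketch; in this run it would return the same NotFound error):

	kubectl --context no-preload-041029 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The test expects the result to contain fake.domain/registry.k8s.io/echoserver:1.4.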
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-041029
helpers_test.go:243: (dbg) docker inspect no-preload-041029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	        "Created": "2025-11-20T22:27:06.220478605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1038663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:27:06.322283475Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hostname",
	        "HostsPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hosts",
	        "LogPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71-json.log",
	        "Name": "/no-preload-041029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-041029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-041029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	                "LowerDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-041029",
	                "Source": "/var/lib/docker/volumes/no-preload-041029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-041029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-041029",
	                "name.minikube.sigs.k8s.io": "no-preload-041029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b397c85b44800160a483d22d8352abe1a4a97371a81af60d3db60b2c2593a1b9",
	            "SandboxKey": "/var/run/docker/netns/b397c85b4480",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-041029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:94:ce:b3:0a:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d249c184d92c757ccd210aec69d5acdf56f64a6ec2365db3e9108375c30dd5a",
	                    "EndpointID": "3aa0882af55eef058c1762492d07da49128930ab7d9bcfc9dba3fb874003dec9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-041029",
	                        "8049b6a31f79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
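Most of the inspect dump above matters only for the port mappings; the forwarded API server port (8443/tcp -> 127.0.0.1:34190 in this run) can be pulled out with a Go-template query instead of scanning the JSON (a sketch against the same container name):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-041029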
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-041029 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-041029 logs -n 25: (1.727500344s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:24 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-559701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-559701 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p newest-cni-135623 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:28:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:28:19.608763 1046058 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:19.609016 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609046 1046058 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:19.609064 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609376 1046058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:19.609928 1046058 out.go:368] Setting JSON to false
	I1120 22:28:19.611285 1046058 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18625,"bootTime":1763659075,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:28:19.611397 1046058 start.go:143] virtualization:  
	I1120 22:28:19.614494 1046058 out.go:179] * [newest-cni-135623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:28:19.618558 1046058 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:28:19.618754 1046058 notify.go:221] Checking for updates...
	I1120 22:28:19.624547 1046058 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:28:19.627376 1046058 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:19.631107 1046058 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:28:19.634185 1046058 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:28:19.637147 1046058 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:28:19.640555 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:19.641122 1046058 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:28:19.684060 1046058 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:28:19.684178 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.750922 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.741777755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.751146 1046058 docker.go:319] overlay module found
	I1120 22:28:19.754305 1046058 out.go:179] * Using the docker driver based on existing profile
	I1120 22:28:19.757094 1046058 start.go:309] selected driver: docker
	I1120 22:28:19.757115 1046058 start.go:930] validating driver "docker" against &{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.757220 1046058 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:28:19.757935 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.812626 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.803819677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.812991 1046058 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:19.813026 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:19.813080 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:19.813118 1046058 start.go:353] cluster config:
	{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.818047 1046058 out.go:179] * Starting "newest-cni-135623" primary control-plane node in "newest-cni-135623" cluster
	I1120 22:28:19.820913 1046058 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:28:19.823836 1046058 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:28:19.826698 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:19.826751 1046058 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:28:19.826761 1046058 cache.go:65] Caching tarball of preloaded images
	I1120 22:28:19.826788 1046058 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:28:19.826856 1046058 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:28:19.826867 1046058 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:28:19.827009 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:19.846362 1046058 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:28:19.846385 1046058 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:28:19.846420 1046058 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:28:19.846446 1046058 start.go:360] acquireMachinesLock for newest-cni-135623: {Name:mk0a4bf77fbaa33e901b00e572e51831d9de02c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:19.846515 1046058 start.go:364] duration metric: took 47.221µs to acquireMachinesLock for "newest-cni-135623"
	I1120 22:28:19.846544 1046058 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:28:19.846555 1046058 fix.go:54] fixHost starting: 
	I1120 22:28:19.846863 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:19.863822 1046058 fix.go:112] recreateIfNeeded on newest-cni-135623: state=Stopped err=<nil>
	W1120 22:28:19.863860 1046058 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:28:15.948116 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	W1120 22:28:18.445599 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	I1120 22:28:18.947015 1038356 node_ready.go:49] node "no-preload-041029" is "Ready"
	I1120 22:28:18.947044 1038356 node_ready.go:38] duration metric: took 14.004801487s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:18.947057 1038356 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:18.947112 1038356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:18.973900 1038356 api_server.go:72] duration metric: took 16.338544725s to wait for apiserver process to appear ...
	I1120 22:28:18.973965 1038356 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:18.973994 1038356 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:28:18.990038 1038356 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:28:18.993887 1038356 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:18.993913 1038356 api_server.go:131] duration metric: took 19.939104ms to wait for apiserver health ...
	I1120 22:28:18.993921 1038356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:19.004685 1038356 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:19.004784 1038356 system_pods.go:61] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.004806 1038356 system_pods.go:61] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.004839 1038356 system_pods.go:61] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.004868 1038356 system_pods.go:61] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.004890 1038356 system_pods.go:61] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.004911 1038356 system_pods.go:61] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.004943 1038356 system_pods.go:61] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.004964 1038356 system_pods.go:61] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending
	I1120 22:28:19.004986 1038356 system_pods.go:74] duration metric: took 11.057947ms to wait for pod list to return data ...
	I1120 22:28:19.005008 1038356 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:19.009645 1038356 default_sa.go:45] found service account: "default"
	I1120 22:28:19.009670 1038356 default_sa.go:55] duration metric: took 4.640199ms for default service account to be created ...
	I1120 22:28:19.009680 1038356 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:28:19.017280 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.017308 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.017314 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.017319 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.017323 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.017326 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.017330 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.017333 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.017346 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.017366 1038356 retry.go:31] will retry after 288.297903ms: missing components: kube-dns
	I1120 22:28:19.317916 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.317956 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.317963 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.317970 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.317974 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.317979 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.317983 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.317987 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.317995 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.318009 1038356 retry.go:31] will retry after 387.681454ms: missing components: kube-dns
	I1120 22:28:19.711340 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.711374 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.711382 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.711388 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.711393 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.711398 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.711401 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.711411 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.711417 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.711431 1038356 retry.go:31] will retry after 439.187632ms: missing components: kube-dns
	I1120 22:28:20.214740 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:20.214772 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running
	I1120 22:28:20.214778 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:20.214783 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:20.214787 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:20.214792 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:20.214797 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:20.214801 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:20.214804 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:28:20.214811 1038356 system_pods.go:126] duration metric: took 1.205126223s to wait for k8s-apps to be running ...
	I1120 22:28:20.214818 1038356 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:28:20.214872 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:20.237045 1038356 system_svc.go:56] duration metric: took 22.216114ms WaitForService to wait for kubelet
	I1120 22:28:20.237071 1038356 kubeadm.go:587] duration metric: took 17.601722336s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:28:20.237090 1038356 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:20.249880 1038356 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:20.249909 1038356 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:20.249922 1038356 node_conditions.go:105] duration metric: took 12.825773ms to run NodePressure ...
	I1120 22:28:20.249934 1038356 start.go:242] waiting for startup goroutines ...
	I1120 22:28:20.249942 1038356 start.go:247] waiting for cluster config update ...
	I1120 22:28:20.249952 1038356 start.go:256] writing updated cluster config ...
	I1120 22:28:20.250241 1038356 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:20.254779 1038356 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:20.266794 1038356 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.274702 1038356 pod_ready.go:94] pod "coredns-66bc5c9577-6dbgj" is "Ready"
	I1120 22:28:20.274726 1038356 pod_ready.go:86] duration metric: took 7.908483ms for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.278017 1038356 pod_ready.go:83] waiting for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.285637 1038356 pod_ready.go:94] pod "etcd-no-preload-041029" is "Ready"
	I1120 22:28:20.285660 1038356 pod_ready.go:86] duration metric: took 7.62171ms for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.289274 1038356 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.299565 1038356 pod_ready.go:94] pod "kube-apiserver-no-preload-041029" is "Ready"
	I1120 22:28:20.299634 1038356 pod_ready.go:86] duration metric: took 10.333794ms for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.303953 1038356 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.661205 1038356 pod_ready.go:94] pod "kube-controller-manager-no-preload-041029" is "Ready"
	I1120 22:28:20.661282 1038356 pod_ready.go:86] duration metric: took 357.252156ms for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.860905 1038356 pod_ready.go:83] waiting for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.259972 1038356 pod_ready.go:94] pod "kube-proxy-n78zb" is "Ready"
	I1120 22:28:21.260000 1038356 pod_ready.go:86] duration metric: took 399.071073ms for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.461389 1038356 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860332 1038356 pod_ready.go:94] pod "kube-scheduler-no-preload-041029" is "Ready"
	I1120 22:28:21.860358 1038356 pod_ready.go:86] duration metric: took 398.939928ms for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860370 1038356 pod_ready.go:40] duration metric: took 1.605560127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:21.916256 1038356 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:21.919813 1038356 out.go:179] * Done! kubectl is now configured to use "no-preload-041029" cluster and "default" namespace by default
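The loop above (pod_ready.go) waits up to 4m0s for every kube-system pod carrying one of the listed labels to report Ready. A minimal manual equivalent with kubectl is sketched below; the label selectors and the "no-preload-041029" context name are taken from the log above, and only two of the six labels are shown since the rest follow the same pattern.

    # Wait for the DNS and apiserver pods to become Ready, mirroring the pod_ready.go checks.
    kubectl --context no-preload-041029 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
    kubectl --context no-preload-041029 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=240s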
	I1120 22:28:19.867117 1046058 out.go:252] * Restarting existing docker container for "newest-cni-135623" ...
	I1120 22:28:19.867221 1046058 cli_runner.go:164] Run: docker start newest-cni-135623
	I1120 22:28:20.167549 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:20.194360 1046058 kic.go:430] container "newest-cni-135623" state is running.
	I1120 22:28:20.194747 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:20.231080 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:20.231352 1046058 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:20.231417 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:20.264515 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:20.269131 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:20.269155 1046058 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:20.270246 1046058 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:28:23.414799 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.414831 1046058 ubuntu.go:182] provisioning hostname "newest-cni-135623"
	I1120 22:28:23.414897 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.433748 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.434079 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.434094 1046058 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-135623 && echo "newest-cni-135623" | sudo tee /etc/hostname
	I1120 22:28:23.601694 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.601827 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.621179 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.621492 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.621514 1046058 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-135623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-135623/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-135623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:23.775228 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
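The SSH command above pins the machine hostname into /etc/hosts: it rewrites an existing 127.0.1.1 entry if present, otherwise appends one. A quick way to confirm the result on the node, using the hostname from the log:

    # Verify the hostname and the /etc/hosts entry written by the script above.
    hostname                                # expected: newest-cni-135623
    grep -n 'newest-cni-135623' /etc/hosts  # expected: a 127.0.1.1 line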
	I1120 22:28:23.775255 1046058 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:23.775302 1046058 ubuntu.go:190] setting up certificates
	I1120 22:28:23.775316 1046058 provision.go:84] configureAuth start
	I1120 22:28:23.775412 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:23.792924 1046058 provision.go:143] copyHostCerts
	I1120 22:28:23.792997 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:23.793017 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:23.793095 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:23.793212 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:23.793226 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:23.793255 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:23.793312 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:23.793322 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:23.793347 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:23.793400 1046058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.newest-cni-135623 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-135623]
	I1120 22:28:24.175067 1046058 provision.go:177] copyRemoteCerts
	I1120 22:28:24.175135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:24.175185 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.195224 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.300104 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:24.321466 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:28:24.348586 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:28:24.371336 1046058 provision.go:87] duration metric: took 595.971597ms to configureAuth
	I1120 22:28:24.371364 1046058 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:24.371566 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:24.371675 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.391446 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:24.391762 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:24.391782 1046058 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:24.739459 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:24.739483 1046058 machine.go:97] duration metric: took 4.508119608s to provisionDockerMachine
	I1120 22:28:24.739495 1046058 start.go:293] postStartSetup for "newest-cni-135623" (driver="docker")
	I1120 22:28:24.739506 1046058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:24.739587 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:24.739641 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.756979 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.860012 1046058 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:24.863669 1046058 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:24.863700 1046058 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:24.863712 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:24.863777 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:24.863878 1046058 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:24.863998 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:24.871985 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:24.890430 1046058 start.go:296] duration metric: took 150.918846ms for postStartSetup
	I1120 22:28:24.890571 1046058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:24.890616 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.908123 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.013420 1046058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:25.019768 1046058 fix.go:56] duration metric: took 5.17320429s for fixHost
	I1120 22:28:25.019805 1046058 start.go:83] releasing machines lock for "newest-cni-135623", held for 5.173274428s
	I1120 22:28:25.019883 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:25.040360 1046058 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:25.040420 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.040476 1046058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:25.040614 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.064095 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.071097 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.166635 1046058 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:25.263474 1046058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:25.301612 1046058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:25.305732 1046058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:25.305810 1046058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:25.313475 1046058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:28:25.313550 1046058 start.go:496] detecting cgroup driver to use...
	I1120 22:28:25.313597 1046058 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:25.313651 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:25.328863 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:25.342166 1046058 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:25.342229 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:25.358110 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:25.371853 1046058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:25.487091 1046058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:25.613512 1046058 docker.go:234] disabling docker service ...
	I1120 22:28:25.613595 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:25.630096 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:25.645594 1046058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:25.776246 1046058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:25.888693 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:25.901960 1046058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:25.917255 1046058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:25.917377 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.927084 1046058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:25.927198 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.936187 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.944988 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.953615 1046058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:25.961745 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.971413 1046058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.980044 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.988745 1046058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:25.996452 1046058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:26.004915 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.122045 1046058 ssh_runner.go:195] Run: sudo systemctl restart crio
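The sed/grep sequence above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts the daemon. A short check that the edits landed, assuming the drop-in path from the log:

    # Show the values the sed commands above are expected to have written.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Confirm the daemon came back after the restart.
    sudo systemctl is-active crio   # expected: active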
	I1120 22:28:26.307050 1046058 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:26.307196 1046058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:26.311586 1046058 start.go:564] Will wait 60s for crictl version
	I1120 22:28:26.311707 1046058 ssh_runner.go:195] Run: which crictl
	I1120 22:28:26.315838 1046058 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:26.343825 1046058 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:28:26.344002 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.372720 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.405777 1046058 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:26.408743 1046058 cli_runner.go:164] Run: docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:26.425613 1046058 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:26.429809 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:26.443060 1046058 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 22:28:26.445993 1046058 kubeadm.go:884] updating cluster {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:26.446166 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:26.446252 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.484434 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.484459 1046058 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:28:26.484521 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.510217 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.510243 1046058 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:26.510251 1046058 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:26.510396 1046058 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-135623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
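The kubelet unit override above is what minikube generates for this node; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp further down in the log. A small sketch for inspecting it on the node after the daemon-reload:

    # Show the drop-in actually installed and check that kubelet restarted with it.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl status kubelet --no-pager | head -n 5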
	I1120 22:28:26.510527 1046058 ssh_runner.go:195] Run: crio config
	I1120 22:28:26.590324 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:26.590350 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:26.590372 1046058 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 22:28:26.592701 1046058 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-135623 NodeName:newest-cni-135623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:26.592862 1046058 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-135623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:26.592938 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:26.608056 1046058 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:26.608135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:26.616237 1046058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:28:26.629637 1046058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:26.642733 1046058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
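The generated kubeadm config shown earlier is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Assuming the `kubeadm config validate` subcommand is available in the v1.34.1 binaries found above, it could be used to sanity-check the file before it is applied (a sketch, not part of the test flow):

    # Validate the generated kubeadm config against the kubeadm API types.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new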
	I1120 22:28:26.655998 1046058 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:28:26.659708 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:26.677753 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.801819 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:26.819744 1046058 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623 for IP: 192.168.76.2
	I1120 22:28:26.819766 1046058 certs.go:195] generating shared ca certs ...
	I1120 22:28:26.819783 1046058 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:26.819916 1046058 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:28:26.819968 1046058 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:28:26.819981 1046058 certs.go:257] generating profile certs ...
	I1120 22:28:26.820068 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key
	I1120 22:28:26.820138 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1
	I1120 22:28:26.820212 1046058 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key
	I1120 22:28:26.820326 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:28:26.820361 1046058 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:28:26.820373 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:28:26.820398 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:28:26.820424 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:28:26.820447 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:28:26.820499 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:26.821136 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:28:26.845858 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:28:26.866347 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:28:26.890289 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:28:26.915043 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:28:26.948865 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:28:26.989139 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:28:27.013019 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:28:27.042486 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:28:27.063401 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:28:27.083564 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:28:27.102595 1046058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:28:27.115311 1046058 ssh_runner.go:195] Run: openssl version
	I1120 22:28:27.122767 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.133554 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:28:27.142802 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147578 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147652 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.190059 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:28:27.198365 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.205986 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:28:27.213966 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217915 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217991 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.260086 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:28:27.268008 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.275695 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:28:27.283799 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287828 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287937 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.329873 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:28:27.337431 1046058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:28:27.341283 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:28:27.382524 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:28:27.424356 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:28:27.475683 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:28:27.527122 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:28:27.595186 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
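Each openssl run above only asks whether the certificate expires within the next 24 hours (-checkend 86400). To print the actual expiry dates for the same files, a minimal sketch using the paths from the log:

    # Print notAfter for every control-plane certificate checked above.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/${crt}.crt"
    done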
	I1120 22:28:27.659874 1046058 kubeadm.go:401] StartCluster: {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:27.660023 1046058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:28:27.660125 1046058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:28:27.731473 1046058 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:27.731540 1046058 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:27.731559 1046058 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:27.731580 1046058 cri.go:89] found id: ""
	I1120 22:28:27.731684 1046058 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:28:27.759544 1046058 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:27Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:27.759694 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:28:27.776624 1046058 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:28:27.776687 1046058 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:28:27.776793 1046058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:28:27.790113 1046058 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:28:27.790746 1046058 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-135623" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.791071 1046058 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-135623" cluster setting kubeconfig missing "newest-cni-135623" context setting]
	I1120 22:28:27.791595 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.793271 1046058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:28:27.803864 1046058 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1120 22:28:27.803937 1046058 kubeadm.go:602] duration metric: took 27.22293ms to restartPrimaryControlPlane
	I1120 22:28:27.803960 1046058 kubeadm.go:403] duration metric: took 144.09676ms to StartCluster
	I1120 22:28:27.804005 1046058 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.804084 1046058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.805018 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.805290 1046058 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:27.805671 1046058 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:27.805740 1046058 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-135623"
	I1120 22:28:27.805754 1046058 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-135623"
	W1120 22:28:27.805760 1046058 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:28:27.805781 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.806246 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.806640 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:27.806715 1046058 addons.go:70] Setting dashboard=true in profile "newest-cni-135623"
	I1120 22:28:27.806754 1046058 addons.go:239] Setting addon dashboard=true in "newest-cni-135623"
	W1120 22:28:27.806779 1046058 addons.go:248] addon dashboard should already be in state true
	I1120 22:28:27.806816 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.807269 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.809253 1046058 addons.go:70] Setting default-storageclass=true in profile "newest-cni-135623"
	I1120 22:28:27.809286 1046058 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-135623"
	I1120 22:28:27.809631 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.817886 1046058 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:27.821328 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:27.868711 1046058 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:28:27.868809 1046058 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:27.872790 1046058 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:28:27.872909 1046058 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:27.872920 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:27.872988 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.875945 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:28:27.875972 1046058 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:28:27.876044 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.877435 1046058 addons.go:239] Setting addon default-storageclass=true in "newest-cni-135623"
	W1120 22:28:27.877465 1046058 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:28:27.877492 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.877947 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.921797 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.940671 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.947918 1046058 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:27.947941 1046058 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:27.948007 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.980681 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:28.169240 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:28.211836 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:28:28.211859 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:28:28.212433 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:28.244789 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:28.316905 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:28:28.316932 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:28:28.383041 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:28:28.383070 1046058 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:28:28.476984 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:28:28.477008 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:28:28.504663 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:28:28.504709 1046058 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:28:28.527249 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:28:28.527276 1046058 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:28:28.548629 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:28:28.548669 1046058 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:28:28.569841 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:28:28.569869 1046058 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:28:28.588156 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:28.588203 1046058 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:28:28.611754 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 20 22:28:19 no-preload-041029 crio[839]: time="2025-11-20T22:28:19.423722362Z" level=info msg="Created container c54dbb2ca58045c2e20ce808569f551426915900677f16ea79cc4246e4024a93: kube-system/coredns-66bc5c9577-6dbgj/coredns" id=026e0f6c-9180-460e-a17b-47546a0ace42 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:19 no-preload-041029 crio[839]: time="2025-11-20T22:28:19.426963659Z" level=info msg="Starting container: c54dbb2ca58045c2e20ce808569f551426915900677f16ea79cc4246e4024a93" id=180a51ca-e2b0-45a2-b1c4-903070c8442a name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:19 no-preload-041029 crio[839]: time="2025-11-20T22:28:19.432624441Z" level=info msg="Started container" PID=2518 containerID=c54dbb2ca58045c2e20ce808569f551426915900677f16ea79cc4246e4024a93 description=kube-system/coredns-66bc5c9577-6dbgj/coredns id=180a51ca-e2b0-45a2-b1c4-903070c8442a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d120e8115f395a282fa3e549fe46b2c9479b9c24893d5e1eea8854074ecf9575
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.431383378Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bde0a22b-5969-4995-b46b-f692048dc78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.431466152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.436713949Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4 UID:d5c2a308-e94e-47c2-ae54-0a65575a7220 NetNS:/var/run/netns/b46e073c-8cca-4f8e-9a84-179f2781e7eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000794b8}] Aliases:map[]}"
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.436753769Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.446842604Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4 UID:d5c2a308-e94e-47c2-ae54-0a65575a7220 NetNS:/var/run/netns/b46e073c-8cca-4f8e-9a84-179f2781e7eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000794b8}] Aliases:map[]}"
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.447447117Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.451236097Z" level=info msg="Ran pod sandbox 63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4 with infra container: default/busybox/POD" id=bde0a22b-5969-4995-b46b-f692048dc78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.452321928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=15ba7f83-d99d-4b68-a556-56d33cce8e5a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.452455403Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=15ba7f83-d99d-4b68-a556-56d33cce8e5a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.4525031Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=15ba7f83-d99d-4b68-a556-56d33cce8e5a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.453469094Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=88c26adb-044c-4112-8ad2-27f1c73c85b4 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:28:22 no-preload-041029 crio[839]: time="2025-11-20T22:28:22.454826485Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.417415159Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=88c26adb-044c-4112-8ad2-27f1c73c85b4 name=/runtime.v1.ImageService/PullImage
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.418285841Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9438f3eb-438c-4508-98d9-46bf7db10a56 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.4200617Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29966b79-0ece-4147-ad47-1912fca15f91 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.429854277Z" level=info msg="Creating container: default/busybox/busybox" id=4920cb22-16be-49b9-aa3a-cf6b0b3bfcc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.430111683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.435603897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.436215261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.460580267Z" level=info msg="Created container 7a854a122941f5d09007f5be47d051e6f3c071f44be69bad606f8859521a45a7: default/busybox/busybox" id=4920cb22-16be-49b9-aa3a-cf6b0b3bfcc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.464016439Z" level=info msg="Starting container: 7a854a122941f5d09007f5be47d051e6f3c071f44be69bad606f8859521a45a7" id=e5a43c63-6d4f-4861-b937-d447029ee08f name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:24 no-preload-041029 crio[839]: time="2025-11-20T22:28:24.468254122Z" level=info msg="Started container" PID=2571 containerID=7a854a122941f5d09007f5be47d051e6f3c071f44be69bad606f8859521a45a7 description=default/busybox/busybox id=e5a43c63-6d4f-4861-b937-d447029ee08f name=/runtime.v1.RuntimeService/StartContainer sandboxID=63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7a854a122941f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   63bee0e07964a       busybox                                     default
	c54dbb2ca5804       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   d120e8115f395       coredns-66bc5c9577-6dbgj                    kube-system
	19f5daf878154       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   92fe888240c9c       storage-provisioner                         kube-system
	b50d178782b7a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   c76b9bebe670c       kindnet-2fs8p                               kube-system
	b8eba2bd50bd8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   f5dbcf0c35b55       kube-proxy-n78zb                            kube-system
	674003a4cd4e9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   063be998bf18a       kube-scheduler-no-preload-041029            kube-system
	d05494cab740a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   6c7be6e322abe       kube-controller-manager-no-preload-041029   kube-system
	e8f291e1398bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   73aba82c1689b       kube-apiserver-no-preload-041029            kube-system
	38512352a911c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   1b05a2dd1390b       etcd-no-preload-041029                      kube-system
	
	
	==> coredns [c54dbb2ca58045c2e20ce808569f551426915900677f16ea79cc4246e4024a93] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60970 - 19426 "HINFO IN 6244597615192265987.8363221194790154284. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020123237s
	
	
	==> describe nodes <==
	Name:               no-preload-041029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-041029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-041029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_27_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-041029
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:28:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:28:28 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:28:28 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:28:28 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:28:28 +0000   Thu, 20 Nov 2025 22:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-041029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8a9cfc0-4549-4e9b-8f8a-328559b1944e
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-6dbgj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-041029                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-2fs8p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-041029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-041029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-n78zb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-041029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-041029 event: Registered Node no-preload-041029 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-041029 status is now: NodeReady
	
	
	==> dmesg <==
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	[Nov20 22:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [38512352a911c9c80e97fa67b2635efc0868b2d8d2d57c2657d509a7a5ccad55] <==
	{"level":"warn","ts":"2025-11-20T22:27:50.862679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:50.905155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:50.953126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:50.977721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.056069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.103183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.181795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.218355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.249989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.304661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.374458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.389029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.409194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.457875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.484542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.521445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.543356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.576158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.607063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.631831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.663671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.699725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.717055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.744740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:27:51.884553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58302","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:28:32 up  5:10,  0 user,  load average: 5.85, 4.09, 3.03
	Linux no-preload-041029 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b50d178782b7a6bb6ad642cca53a76dfd87330af63edbe5644ef974b0c642d68] <==
	I1120 22:28:08.313980       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:28:08.403163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:28:08.403314       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:28:08.403339       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:28:08.403354       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:28:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:28:08.609963       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:28:08.609988       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:28:08.609996       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:28:08.610736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 22:28:08.810080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:28:08.810186       1 metrics.go:72] Registering metrics
	I1120 22:28:08.810266       1 controller.go:711] "Syncing nftables rules"
	I1120 22:28:18.617984       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:28:18.618034       1 main.go:301] handling current node
	I1120 22:28:28.609617       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:28:28.609653       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e8f291e1398bd63c9f4158eab0b126b8415c3c6ddec1cf01a855adac5fbddd0c] <==
	I1120 22:27:53.355188       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:27:53.355850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:27:53.394485       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:27:53.399945       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 22:27:53.406231       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:27:53.420424       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:27:53.549853       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:27:54.007029       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 22:27:54.057663       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 22:27:54.057738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:27:54.971712       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:27:55.124348       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:27:55.202781       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 22:27:55.210863       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 22:27:55.212314       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:27:55.220565       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:27:56.122580       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:27:56.708373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:27:56.765232       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 22:27:56.781392       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:28:01.914538       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:28:01.928422       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:28:02.138942       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 22:28:02.218533       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 22:28:30.284034       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:57096: use of closed network connection
	
	
	==> kube-controller-manager [d05494cab740aef9f566ac8c7ad0aa43ad81b9c60ebc2514e4b579943f901433] <==
	I1120 22:28:01.255705       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:28:01.255722       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 22:28:01.256026       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:28:01.256229       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:28:01.256679       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:28:01.263740       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:28:01.283453       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:28:01.295308       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-041029" podCIDRs=["10.244.0.0/24"]
	I1120 22:28:01.307414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:28:01.307556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 22:28:01.307573       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:28:01.307584       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:28:01.307608       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 22:28:01.307618       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 22:28:01.307625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 22:28:01.307636       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 22:28:01.309497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:28:01.309516       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:28:01.309525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:28:01.309562       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 22:28:01.371078       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:01.371182       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:28:01.371217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:28:01.394940       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:21.235478       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8eba2bd50bd8a28b7b8a1d1b10cb5bc09184306fa3729e9d07fa1cc5b3ec6d9] <==
	I1120 22:28:03.265812       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:28:03.390493       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:28:03.492791       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:28:03.492825       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:28:03.492901       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:28:03.596672       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:28:03.596743       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:28:03.611613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:28:03.611956       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:28:03.611972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:03.624847       1 config.go:200] "Starting service config controller"
	I1120 22:28:03.624867       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:28:03.624885       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:28:03.624890       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:28:03.624902       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:28:03.624908       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:28:03.626072       1 config.go:309] "Starting node config controller"
	I1120 22:28:03.626083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:28:03.626089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:28:03.728372       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:28:03.728411       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:28:03.728467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [674003a4cd4e920deddd10127205a10027637fa588dab3b0f8c309191a4e7790] <==
	I1120 22:27:54.122801       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:27:54.122911       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:27:54.122962       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:27:54.123000       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 22:27:54.146246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:27:54.160631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:27:54.160798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:27:54.161140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 22:27:54.161249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:27:54.161357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 22:27:54.161444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:27:54.161532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 22:27:54.161617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:27:54.161700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:27:54.161797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:27:54.162005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 22:27:54.162100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:27:54.162185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:27:54.162294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:27:54.164673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 22:27:54.164793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 22:27:54.164897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 22:27:54.164957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:27:55.098256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:27:58.223670       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:27:57 no-preload-041029 kubelet[2025]: E1120 22:27:57.918847    2025 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-041029\" already exists" pod="kube-system/kube-apiserver-no-preload-041029"
	Nov 20 22:27:57 no-preload-041029 kubelet[2025]: E1120 22:27:57.931167    2025 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-041029\" already exists" pod="kube-system/etcd-no-preload-041029"
	Nov 20 22:28:01 no-preload-041029 kubelet[2025]: I1120 22:28:01.343814    2025 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 22:28:01 no-preload-041029 kubelet[2025]: I1120 22:28:01.345082    2025 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.378297    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2d930946-643e-4c53-84fc-d1f2bc7882f3-cni-cfg\") pod \"kindnet-2fs8p\" (UID: \"2d930946-643e-4c53-84fc-d1f2bc7882f3\") " pod="kube-system/kindnet-2fs8p"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.378824    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3bbf58f-77ab-4e32-b0df-64ae33d7674d-xtables-lock\") pod \"kube-proxy-n78zb\" (UID: \"f3bbf58f-77ab-4e32-b0df-64ae33d7674d\") " pod="kube-system/kube-proxy-n78zb"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.378935    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3bbf58f-77ab-4e32-b0df-64ae33d7674d-lib-modules\") pod \"kube-proxy-n78zb\" (UID: \"f3bbf58f-77ab-4e32-b0df-64ae33d7674d\") " pod="kube-system/kube-proxy-n78zb"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.379066    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn79m\" (UniqueName: \"kubernetes.io/projected/f3bbf58f-77ab-4e32-b0df-64ae33d7674d-kube-api-access-mn79m\") pod \"kube-proxy-n78zb\" (UID: \"f3bbf58f-77ab-4e32-b0df-64ae33d7674d\") " pod="kube-system/kube-proxy-n78zb"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.379164    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d930946-643e-4c53-84fc-d1f2bc7882f3-lib-modules\") pod \"kindnet-2fs8p\" (UID: \"2d930946-643e-4c53-84fc-d1f2bc7882f3\") " pod="kube-system/kindnet-2fs8p"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.379256    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3bbf58f-77ab-4e32-b0df-64ae33d7674d-kube-proxy\") pod \"kube-proxy-n78zb\" (UID: \"f3bbf58f-77ab-4e32-b0df-64ae33d7674d\") " pod="kube-system/kube-proxy-n78zb"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.379368    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d930946-643e-4c53-84fc-d1f2bc7882f3-xtables-lock\") pod \"kindnet-2fs8p\" (UID: \"2d930946-643e-4c53-84fc-d1f2bc7882f3\") " pod="kube-system/kindnet-2fs8p"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.379460    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7x9x\" (UniqueName: \"kubernetes.io/projected/2d930946-643e-4c53-84fc-d1f2bc7882f3-kube-api-access-q7x9x\") pod \"kindnet-2fs8p\" (UID: \"2d930946-643e-4c53-84fc-d1f2bc7882f3\") " pod="kube-system/kindnet-2fs8p"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: I1120 22:28:02.561656    2025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:28:02 no-preload-041029 kubelet[2025]: W1120 22:28:02.689394    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-f5dbcf0c35b5536debaf992d1375d245837faac1e639a14f354ff5fe0e9934fe WatchSource:0}: Error finding container f5dbcf0c35b5536debaf992d1375d245837faac1e639a14f354ff5fe0e9934fe: Status 404 returned error can't find the container with id f5dbcf0c35b5536debaf992d1375d245837faac1e639a14f354ff5fe0e9934fe
	Nov 20 22:28:03 no-preload-041029 kubelet[2025]: I1120 22:28:03.981526    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n78zb" podStartSLOduration=1.981510477 podStartE2EDuration="1.981510477s" podCreationTimestamp="2025-11-20 22:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:03.981152712 +0000 UTC m=+7.447500759" watchObservedRunningTime="2025-11-20 22:28:03.981510477 +0000 UTC m=+7.447858524"
	Nov 20 22:28:18 no-preload-041029 kubelet[2025]: I1120 22:28:18.920050    2025 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 22:28:18 no-preload-041029 kubelet[2025]: I1120 22:28:18.977291    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2fs8p" podStartSLOduration=11.553800993 podStartE2EDuration="16.977273818s" podCreationTimestamp="2025-11-20 22:28:02 +0000 UTC" firstStartedPulling="2025-11-20 22:28:02.779075241 +0000 UTC m=+6.245423288" lastFinishedPulling="2025-11-20 22:28:08.202548066 +0000 UTC m=+11.668896113" observedRunningTime="2025-11-20 22:28:08.991124345 +0000 UTC m=+12.457472400" watchObservedRunningTime="2025-11-20 22:28:18.977273818 +0000 UTC m=+22.443621874"
	Nov 20 22:28:19 no-preload-041029 kubelet[2025]: I1120 22:28:19.119430    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0fcde6b-aaaa-4f14-9417-59f3222dbed0-config-volume\") pod \"coredns-66bc5c9577-6dbgj\" (UID: \"c0fcde6b-aaaa-4f14-9417-59f3222dbed0\") " pod="kube-system/coredns-66bc5c9577-6dbgj"
	Nov 20 22:28:19 no-preload-041029 kubelet[2025]: I1120 22:28:19.119490    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96xs\" (UniqueName: \"kubernetes.io/projected/c0fcde6b-aaaa-4f14-9417-59f3222dbed0-kube-api-access-g96xs\") pod \"coredns-66bc5c9577-6dbgj\" (UID: \"c0fcde6b-aaaa-4f14-9417-59f3222dbed0\") " pod="kube-system/coredns-66bc5c9577-6dbgj"
	Nov 20 22:28:19 no-preload-041029 kubelet[2025]: I1120 22:28:19.119540    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knswk\" (UniqueName: \"kubernetes.io/projected/48ce6d51-0b32-4396-9e66-ce78a12fe4da-kube-api-access-knswk\") pod \"storage-provisioner\" (UID: \"48ce6d51-0b32-4396-9e66-ce78a12fe4da\") " pod="kube-system/storage-provisioner"
	Nov 20 22:28:19 no-preload-041029 kubelet[2025]: I1120 22:28:19.119562    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48ce6d51-0b32-4396-9e66-ce78a12fe4da-tmp\") pod \"storage-provisioner\" (UID: \"48ce6d51-0b32-4396-9e66-ce78a12fe4da\") " pod="kube-system/storage-provisioner"
	Nov 20 22:28:20 no-preload-041029 kubelet[2025]: I1120 22:28:20.077846    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6dbgj" podStartSLOduration=18.077826308 podStartE2EDuration="18.077826308s" podCreationTimestamp="2025-11-20 22:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:20.021821608 +0000 UTC m=+23.488169663" watchObservedRunningTime="2025-11-20 22:28:20.077826308 +0000 UTC m=+23.544174354"
	Nov 20 22:28:20 no-preload-041029 kubelet[2025]: I1120 22:28:20.134557    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.134538199 podStartE2EDuration="15.134538199s" podCreationTimestamp="2025-11-20 22:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 22:28:20.082187545 +0000 UTC m=+23.548535617" watchObservedRunningTime="2025-11-20 22:28:20.134538199 +0000 UTC m=+23.600886246"
	Nov 20 22:28:22 no-preload-041029 kubelet[2025]: I1120 22:28:22.252755    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vlx\" (UniqueName: \"kubernetes.io/projected/d5c2a308-e94e-47c2-ae54-0a65575a7220-kube-api-access-j4vlx\") pod \"busybox\" (UID: \"d5c2a308-e94e-47c2-ae54-0a65575a7220\") " pod="default/busybox"
	Nov 20 22:28:22 no-preload-041029 kubelet[2025]: W1120 22:28:22.449434    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4 WatchSource:0}: Error finding container 63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4: Status 404 returned error can't find the container with id 63bee0e07964addcd5031d619e153d02d7b8115290cb8c1c2ec68ffe2bff9de4
	
	
	==> storage-provisioner [19f5daf878154951bb7e92677c53af95c9c237051d4ed306c5d2e8b65ed2b8da] <==
	I1120 22:28:19.409269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 22:28:19.461910       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:28:19.462024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:28:19.464743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:19.492154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:28:19.492420       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:28:19.494845       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-041029_539d5aaa-9fe5-41db-9a7c-75d33a5bc992!
	I1120 22:28:19.496005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"415b729b-7223-449b-a0a8-421bccd3a052", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-041029_539d5aaa-9fe5-41db-9a7c-75d33a5bc992 became leader
	W1120 22:28:19.496179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:19.507985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:28:19.595048       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-041029_539d5aaa-9fe5-41db-9a7c-75d33a5bc992!
	W1120 22:28:21.511935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:21.524186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:23.527568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:23.534725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:25.538322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:25.549024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:27.553393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:27.562475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:29.566182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:29.573673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:31.577672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:28:31.587679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-041029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.56s)
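Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings show up roughly every two seconds because the provisioner's leader election appears to use a plain v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) as its lock, as the LeaderElection event in the log suggests, so every lock read and renew touches the deprecated resource. The following is a minimal client-go sketch of the Lease-based lock that the warning points toward; it is not the storage-provisioner's actual code, and the lock name, namespace, identity, and timings are illustrative only.

// Minimal sketch (assumed names and timings): the same election expressed
// with a coordination.k8s.io/v1 Lease lock instead of a v1 Endpoints lock.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // placeholder identity for the lock holder

	// Lease object replaces the kube-system/k8s.io-minikube-hostpath Endpoints lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller once leadership is acquired
			},
			OnStoppedLeading: func() {
				// stop provisioning work when leadership is lost
			},
		},
	})
}

Moving the lock to coordination.k8s.io/v1 Leases would silence this class of warning without changing the election semantics.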

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-135623 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-135623 --alsologtostderr -v=1: exit status 80 (2.39429223s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-135623 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:28:35.967602 1048541 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:35.967801 1048541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:35.967827 1048541 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:35.967847 1048541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:35.968668 1048541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:35.968976 1048541 out.go:368] Setting JSON to false
	I1120 22:28:35.969005 1048541 mustload.go:66] Loading cluster: newest-cni-135623
	I1120 22:28:35.969558 1048541 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:35.970041 1048541 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:35.989400 1048541 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:35.989719 1048541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:36.058635 1048541 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-20 22:28:36.046326703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:36.059550 1048541 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-135623 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:28:36.065122 1048541 out.go:179] * Pausing node newest-cni-135623 ... 
	I1120 22:28:36.067937 1048541 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:36.068295 1048541 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:36.068348 1048541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:36.087376 1048541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:36.190451 1048541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:36.203432 1048541 pause.go:52] kubelet running: true
	I1120 22:28:36.203504 1048541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:28:36.374574 1048541 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:28:36.374706 1048541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:28:36.446186 1048541 cri.go:89] found id: "e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d"
	I1120 22:28:36.446209 1048541 cri.go:89] found id: "2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623"
	I1120 22:28:36.446215 1048541 cri.go:89] found id: "426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132"
	I1120 22:28:36.446219 1048541 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:36.446222 1048541 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:36.446225 1048541 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:36.446229 1048541 cri.go:89] found id: ""
	I1120 22:28:36.446281 1048541 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:28:36.458540 1048541 retry.go:31] will retry after 191.409873ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:36Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:36.651048 1048541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:36.664205 1048541 pause.go:52] kubelet running: false
	I1120 22:28:36.664298 1048541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:28:36.813003 1048541 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:28:36.813111 1048541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:28:36.913249 1048541 cri.go:89] found id: "e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d"
	I1120 22:28:36.913328 1048541 cri.go:89] found id: "2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623"
	I1120 22:28:36.913357 1048541 cri.go:89] found id: "426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132"
	I1120 22:28:36.913386 1048541 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:36.913408 1048541 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:36.913443 1048541 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:36.913471 1048541 cri.go:89] found id: ""
	I1120 22:28:36.913558 1048541 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:28:36.925940 1048541 retry.go:31] will retry after 429.671106ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:36Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:37.356726 1048541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:37.369465 1048541 pause.go:52] kubelet running: false
	I1120 22:28:37.369542 1048541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:28:37.592702 1048541 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:28:37.592796 1048541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:28:37.682996 1048541 cri.go:89] found id: "e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d"
	I1120 22:28:37.683021 1048541 cri.go:89] found id: "2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623"
	I1120 22:28:37.683035 1048541 cri.go:89] found id: "426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132"
	I1120 22:28:37.683040 1048541 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:37.683043 1048541 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:37.683047 1048541 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:37.683050 1048541 cri.go:89] found id: ""
	I1120 22:28:37.683125 1048541 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:28:37.697487 1048541 retry.go:31] will retry after 351.110452ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:37Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:38.048842 1048541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:38.062712 1048541 pause.go:52] kubelet running: false
	I1120 22:28:38.062780 1048541 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:28:38.205267 1048541 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:28:38.205401 1048541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:28:38.270558 1048541 cri.go:89] found id: "e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d"
	I1120 22:28:38.270590 1048541 cri.go:89] found id: "2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623"
	I1120 22:28:38.270595 1048541 cri.go:89] found id: "426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132"
	I1120 22:28:38.270599 1048541 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:38.270603 1048541 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:38.270606 1048541 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:38.270610 1048541 cri.go:89] found id: ""
	I1120 22:28:38.270669 1048541 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:28:38.285165 1048541 out.go:203] 
	W1120 22:28:38.288120 1048541 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:28:38.288192 1048541 out.go:285] * 
	* 
	W1120 22:28:38.296827 1048541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:28:38.299601 1048541 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-135623 --alsologtostderr -v=1 failed: exit status 80
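For context on the exit status 80 above: every pause attempt in the stderr log fails at the same step, sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory"; minikube retries with a short backoff (the retry.go lines) and then gives up with GUEST_PAUSE. The error suggests runc's default state directory (/run/runc) is simply absent on the node, so no amount of retrying can succeed. The sketch below reproduces only that retry-then-fail shape; it is not minikube's pause implementation, it runs the command locally rather than over SSH into the node, and the retry count and delays are illustrative.

// Rough sketch (assumed retry count/backoff) of the retry-then-fail pattern
// visible in the pause log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers asks runc for its running containers; on this node it
// fails with "open /run/runc: no such file or directory".
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	backoff := 200 * time.Millisecond
	var lastErr error
	for attempt := 0; attempt < 4; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = fmt.Errorf("list running: %w: %s", err, out)
		fmt.Printf("will retry after %v: %v\n", backoff, lastErr)
		time.Sleep(backoff)
		backoff *= 2
	}
	// Mirrors the GUEST_PAUSE exit in the log: give up after the retries.
	fmt.Printf("X Exiting due to GUEST_PAUSE: %v\n", lastErr)
}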
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-135623
helpers_test.go:243: (dbg) docker inspect newest-cni-135623:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	        "Created": "2025-11-20T22:27:40.188334711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1046187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:28:19.901161293Z",
	            "FinishedAt": "2025-11-20T22:28:18.858774786Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hostname",
	        "HostsPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hosts",
	        "LogPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084-json.log",
	        "Name": "/newest-cni-135623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-135623:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-135623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	                "LowerDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-135623",
	                "Source": "/var/lib/docker/volumes/newest-cni-135623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-135623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-135623",
	                "name.minikube.sigs.k8s.io": "newest-cni-135623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d00759dcfd2326940f2cc27a856ad67c0bfebd0b53558fdd995000d56de3bc9",
	            "SandboxKey": "/var/run/docker/netns/6d00759dcfd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34198"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34201"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34199"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34200"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-135623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:01:82:5e:cf:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "384cacf95f51a5dca0506b04f083a5c52691e66165cd46827abd11d3e9dc7c6a",
	                    "EndpointID": "dd38260ce197a64da254acf1bcf6777179283e36ec5a133e1a59462a5465c51b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-135623",
	                        "22d262387b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623
E1120 22:28:38.578447  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623: exit status 2 (331.87217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25: (1.059719036s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p newest-cni-135623 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p no-preload-041029 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ image   │ newest-cni-135623 image list --format=json                                                                                                                                                                                                    │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ pause   │ -p newest-cni-135623 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:28:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:28:19.608763 1046058 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:19.609016 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609046 1046058 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:19.609064 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609376 1046058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:19.609928 1046058 out.go:368] Setting JSON to false
	I1120 22:28:19.611285 1046058 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18625,"bootTime":1763659075,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:28:19.611397 1046058 start.go:143] virtualization:  
	I1120 22:28:19.614494 1046058 out.go:179] * [newest-cni-135623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:28:19.618558 1046058 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:28:19.618754 1046058 notify.go:221] Checking for updates...
	I1120 22:28:19.624547 1046058 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:28:19.627376 1046058 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:19.631107 1046058 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:28:19.634185 1046058 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:28:19.637147 1046058 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:28:19.640555 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:19.641122 1046058 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:28:19.684060 1046058 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:28:19.684178 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.750922 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.741777755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.751146 1046058 docker.go:319] overlay module found
	I1120 22:28:19.754305 1046058 out.go:179] * Using the docker driver based on existing profile
	I1120 22:28:19.757094 1046058 start.go:309] selected driver: docker
	I1120 22:28:19.757115 1046058 start.go:930] validating driver "docker" against &{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.757220 1046058 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:28:19.757935 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.812626 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.803819677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.812991 1046058 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:19.813026 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:19.813080 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:19.813118 1046058 start.go:353] cluster config:
	{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.818047 1046058 out.go:179] * Starting "newest-cni-135623" primary control-plane node in "newest-cni-135623" cluster
	I1120 22:28:19.820913 1046058 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:28:19.823836 1046058 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:28:19.826698 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:19.826751 1046058 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:28:19.826761 1046058 cache.go:65] Caching tarball of preloaded images
	I1120 22:28:19.826788 1046058 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:28:19.826856 1046058 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:28:19.826867 1046058 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:28:19.827009 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:19.846362 1046058 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:28:19.846385 1046058 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:28:19.846420 1046058 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:28:19.846446 1046058 start.go:360] acquireMachinesLock for newest-cni-135623: {Name:mk0a4bf77fbaa33e901b00e572e51831d9de02c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:19.846515 1046058 start.go:364] duration metric: took 47.221µs to acquireMachinesLock for "newest-cni-135623"
	I1120 22:28:19.846544 1046058 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:28:19.846555 1046058 fix.go:54] fixHost starting: 
	I1120 22:28:19.846863 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:19.863822 1046058 fix.go:112] recreateIfNeeded on newest-cni-135623: state=Stopped err=<nil>
	W1120 22:28:19.863860 1046058 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:28:15.948116 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	W1120 22:28:18.445599 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	I1120 22:28:18.947015 1038356 node_ready.go:49] node "no-preload-041029" is "Ready"
	I1120 22:28:18.947044 1038356 node_ready.go:38] duration metric: took 14.004801487s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:18.947057 1038356 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:18.947112 1038356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:18.973900 1038356 api_server.go:72] duration metric: took 16.338544725s to wait for apiserver process to appear ...
	I1120 22:28:18.973965 1038356 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:18.973994 1038356 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:28:18.990038 1038356 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:28:18.993887 1038356 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:18.993913 1038356 api_server.go:131] duration metric: took 19.939104ms to wait for apiserver health ...
	I1120 22:28:18.993921 1038356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:19.004685 1038356 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:19.004784 1038356 system_pods.go:61] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.004806 1038356 system_pods.go:61] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.004839 1038356 system_pods.go:61] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.004868 1038356 system_pods.go:61] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.004890 1038356 system_pods.go:61] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.004911 1038356 system_pods.go:61] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.004943 1038356 system_pods.go:61] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.004964 1038356 system_pods.go:61] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending
	I1120 22:28:19.004986 1038356 system_pods.go:74] duration metric: took 11.057947ms to wait for pod list to return data ...
	I1120 22:28:19.005008 1038356 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:19.009645 1038356 default_sa.go:45] found service account: "default"
	I1120 22:28:19.009670 1038356 default_sa.go:55] duration metric: took 4.640199ms for default service account to be created ...
	I1120 22:28:19.009680 1038356 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:28:19.017280 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.017308 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.017314 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.017319 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.017323 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.017326 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.017330 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.017333 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.017346 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.017366 1038356 retry.go:31] will retry after 288.297903ms: missing components: kube-dns
	I1120 22:28:19.317916 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.317956 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.317963 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.317970 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.317974 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.317979 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.317983 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.317987 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.317995 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.318009 1038356 retry.go:31] will retry after 387.681454ms: missing components: kube-dns
	I1120 22:28:19.711340 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.711374 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.711382 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.711388 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.711393 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.711398 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.711401 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.711411 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.711417 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.711431 1038356 retry.go:31] will retry after 439.187632ms: missing components: kube-dns
	I1120 22:28:20.214740 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:20.214772 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running
	I1120 22:28:20.214778 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:20.214783 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:20.214787 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:20.214792 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:20.214797 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:20.214801 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:20.214804 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:28:20.214811 1038356 system_pods.go:126] duration metric: took 1.205126223s to wait for k8s-apps to be running ...
	I1120 22:28:20.214818 1038356 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:28:20.214872 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:20.237045 1038356 system_svc.go:56] duration metric: took 22.216114ms WaitForService to wait for kubelet
	I1120 22:28:20.237071 1038356 kubeadm.go:587] duration metric: took 17.601722336s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:28:20.237090 1038356 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:20.249880 1038356 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:20.249909 1038356 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:20.249922 1038356 node_conditions.go:105] duration metric: took 12.825773ms to run NodePressure ...
	I1120 22:28:20.249934 1038356 start.go:242] waiting for startup goroutines ...
	I1120 22:28:20.249942 1038356 start.go:247] waiting for cluster config update ...
	I1120 22:28:20.249952 1038356 start.go:256] writing updated cluster config ...
	I1120 22:28:20.250241 1038356 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:20.254779 1038356 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:20.266794 1038356 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.274702 1038356 pod_ready.go:94] pod "coredns-66bc5c9577-6dbgj" is "Ready"
	I1120 22:28:20.274726 1038356 pod_ready.go:86] duration metric: took 7.908483ms for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.278017 1038356 pod_ready.go:83] waiting for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.285637 1038356 pod_ready.go:94] pod "etcd-no-preload-041029" is "Ready"
	I1120 22:28:20.285660 1038356 pod_ready.go:86] duration metric: took 7.62171ms for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.289274 1038356 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.299565 1038356 pod_ready.go:94] pod "kube-apiserver-no-preload-041029" is "Ready"
	I1120 22:28:20.299634 1038356 pod_ready.go:86] duration metric: took 10.333794ms for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.303953 1038356 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.661205 1038356 pod_ready.go:94] pod "kube-controller-manager-no-preload-041029" is "Ready"
	I1120 22:28:20.661282 1038356 pod_ready.go:86] duration metric: took 357.252156ms for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.860905 1038356 pod_ready.go:83] waiting for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.259972 1038356 pod_ready.go:94] pod "kube-proxy-n78zb" is "Ready"
	I1120 22:28:21.260000 1038356 pod_ready.go:86] duration metric: took 399.071073ms for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.461389 1038356 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860332 1038356 pod_ready.go:94] pod "kube-scheduler-no-preload-041029" is "Ready"
	I1120 22:28:21.860358 1038356 pod_ready.go:86] duration metric: took 398.939928ms for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860370 1038356 pod_ready.go:40] duration metric: took 1.605560127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:21.916256 1038356 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:21.919813 1038356 out.go:179] * Done! kubectl is now configured to use "no-preload-041029" cluster and "default" namespace by default
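For reference, the healthz wait logged above (api_server.go polling https://192.168.85.2:8443/healthz until it returns 200) can be approximated with a small standalone Go program. This is a hedged sketch, not minikube's implementation; the endpoint and timeout are assumptions taken from the log.

	// healthzwait: hypothetical sketch of the apiserver healthz polling shown in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The bootstrapping apiserver serves a self-signed certificate, so the probe
		// skips verification here, as it runs before the cluster CA is trusted.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200, matching the "returned 200: ok" lines above
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until the deadline
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}

Skipping TLS verification is only reasonable because the probe runs before the cluster CA is distributed; a production health check would pin the CA bundle instead.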
	I1120 22:28:19.867117 1046058 out.go:252] * Restarting existing docker container for "newest-cni-135623" ...
	I1120 22:28:19.867221 1046058 cli_runner.go:164] Run: docker start newest-cni-135623
	I1120 22:28:20.167549 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:20.194360 1046058 kic.go:430] container "newest-cni-135623" state is running.
	I1120 22:28:20.194747 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:20.231080 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:20.231352 1046058 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:20.231417 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:20.264515 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:20.269131 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:20.269155 1046058 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:20.270246 1046058 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:28:23.414799 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.414831 1046058 ubuntu.go:182] provisioning hostname "newest-cni-135623"
	I1120 22:28:23.414897 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.433748 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.434079 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.434094 1046058 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-135623 && echo "newest-cni-135623" | sudo tee /etc/hostname
	I1120 22:28:23.601694 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.601827 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.621179 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.621492 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.621514 1046058 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-135623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-135623/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-135623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:23.775228 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:28:23.775255 1046058 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:23.775302 1046058 ubuntu.go:190] setting up certificates
	I1120 22:28:23.775316 1046058 provision.go:84] configureAuth start
	I1120 22:28:23.775412 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:23.792924 1046058 provision.go:143] copyHostCerts
	I1120 22:28:23.792997 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:23.793017 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:23.793095 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:23.793212 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:23.793226 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:23.793255 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:23.793312 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:23.793322 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:23.793347 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:23.793400 1046058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.newest-cni-135623 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-135623]
	I1120 22:28:24.175067 1046058 provision.go:177] copyRemoteCerts
	I1120 22:28:24.175135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:24.175185 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.195224 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.300104 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:24.321466 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:28:24.348586 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:28:24.371336 1046058 provision.go:87] duration metric: took 595.971597ms to configureAuth
	I1120 22:28:24.371364 1046058 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:24.371566 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:24.371675 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.391446 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:24.391762 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:24.391782 1046058 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:24.739459 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:24.739483 1046058 machine.go:97] duration metric: took 4.508119608s to provisionDockerMachine
	I1120 22:28:24.739495 1046058 start.go:293] postStartSetup for "newest-cni-135623" (driver="docker")
	I1120 22:28:24.739506 1046058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:24.739587 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:24.739641 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.756979 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.860012 1046058 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:24.863669 1046058 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:24.863700 1046058 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:24.863712 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:24.863777 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:24.863878 1046058 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:24.863998 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:24.871985 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:24.890430 1046058 start.go:296] duration metric: took 150.918846ms for postStartSetup
	I1120 22:28:24.890571 1046058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:24.890616 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.908123 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.013420 1046058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:25.019768 1046058 fix.go:56] duration metric: took 5.17320429s for fixHost
	I1120 22:28:25.019805 1046058 start.go:83] releasing machines lock for "newest-cni-135623", held for 5.173274428s
	I1120 22:28:25.019883 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:25.040360 1046058 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:25.040420 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.040476 1046058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:25.040614 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.064095 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.071097 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.166635 1046058 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:25.263474 1046058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:25.301612 1046058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:25.305732 1046058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:25.305810 1046058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:25.313475 1046058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:28:25.313550 1046058 start.go:496] detecting cgroup driver to use...
	I1120 22:28:25.313597 1046058 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:25.313651 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:25.328863 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:25.342166 1046058 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:25.342229 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:25.358110 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:25.371853 1046058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:25.487091 1046058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:25.613512 1046058 docker.go:234] disabling docker service ...
	I1120 22:28:25.613595 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:25.630096 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:25.645594 1046058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:25.776246 1046058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:25.888693 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:25.901960 1046058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:25.917255 1046058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:25.917377 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.927084 1046058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:25.927198 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.936187 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.944988 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.953615 1046058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:25.961745 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.971413 1046058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.980044 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.988745 1046058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:25.996452 1046058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:26.004915 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.122045 1046058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:28:26.307050 1046058 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:26.307196 1046058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
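The 60-second wait for /var/run/crio/crio.sock above is essentially a stat-until-present loop. A minimal sketch of the same idea follows (hypothetical helper, not minikube's start.go; path and timeout taken from the log).

	// sockwait: poll for a CRI socket file until it exists or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists, so the runtime is listening (or about to)
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}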
	I1120 22:28:26.311586 1046058 start.go:564] Will wait 60s for crictl version
	I1120 22:28:26.311707 1046058 ssh_runner.go:195] Run: which crictl
	I1120 22:28:26.315838 1046058 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:26.343825 1046058 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:28:26.344002 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.372720 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.405777 1046058 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:26.408743 1046058 cli_runner.go:164] Run: docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:26.425613 1046058 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:26.429809 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:26.443060 1046058 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 22:28:26.445993 1046058 kubeadm.go:884] updating cluster {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:26.446166 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:26.446252 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.484434 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.484459 1046058 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:28:26.484521 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.510217 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.510243 1046058 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:26.510251 1046058 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:26.510396 1046058 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-135623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:26.510527 1046058 ssh_runner.go:195] Run: crio config
	I1120 22:28:26.590324 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:26.590350 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:26.590372 1046058 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 22:28:26.592701 1046058 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-135623 NodeName:newest-cni-135623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:26.592862 1046058 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-135623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:26.592938 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:26.608056 1046058 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:26.608135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:26.616237 1046058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:28:26.629637 1046058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:26.642733 1046058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
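The kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), as dumped earlier in the log. A quick way to sanity-check such a file before handing it to kubeadm is to decode each document and print its apiVersion and kind; the sketch below is illustrative only and assumes gopkg.in/yaml.v3 plus a local file named kubeadm.yaml.

	// kubeadmkinds: list the apiVersion/kind of each document in a multi-doc kubeadm config.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical path to the rendered config
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more YAML documents in the stream
				}
				fmt.Println("decode error:", err)
				os.Exit(1)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}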
	I1120 22:28:26.655998 1046058 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:28:26.659708 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:26.677753 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.801819 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:26.819744 1046058 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623 for IP: 192.168.76.2
	I1120 22:28:26.819766 1046058 certs.go:195] generating shared ca certs ...
	I1120 22:28:26.819783 1046058 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:26.819916 1046058 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:28:26.819968 1046058 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:28:26.819981 1046058 certs.go:257] generating profile certs ...
	I1120 22:28:26.820068 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key
	I1120 22:28:26.820138 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1
	I1120 22:28:26.820212 1046058 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key
	I1120 22:28:26.820326 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:28:26.820361 1046058 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:28:26.820373 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:28:26.820398 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:28:26.820424 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:28:26.820447 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:28:26.820499 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:26.821136 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:28:26.845858 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:28:26.866347 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:28:26.890289 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:28:26.915043 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:28:26.948865 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:28:26.989139 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:28:27.013019 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:28:27.042486 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:28:27.063401 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:28:27.083564 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:28:27.102595 1046058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:28:27.115311 1046058 ssh_runner.go:195] Run: openssl version
	I1120 22:28:27.122767 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.133554 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:28:27.142802 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147578 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147652 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.190059 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:28:27.198365 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.205986 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:28:27.213966 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217915 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217991 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.260086 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:28:27.268008 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.275695 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:28:27.283799 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287828 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287937 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.329873 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:28:27.337431 1046058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:28:27.341283 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:28:27.382524 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:28:27.424356 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:28:27.475683 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:28:27.527122 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:28:27.595186 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 22:28:27.659874 1046058 kubeadm.go:401] StartCluster: {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:27.660023 1046058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:28:27.660125 1046058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:28:27.731473 1046058 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:27.731540 1046058 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:27.731559 1046058 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:27.731580 1046058 cri.go:89] found id: ""
	I1120 22:28:27.731684 1046058 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:28:27.759544 1046058 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:27Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:27.759694 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:28:27.776624 1046058 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:28:27.776687 1046058 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:28:27.776793 1046058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:28:27.790113 1046058 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:28:27.790746 1046058 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-135623" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.791071 1046058 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-135623" cluster setting kubeconfig missing "newest-cni-135623" context setting]
	I1120 22:28:27.791595 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.793271 1046058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:28:27.803864 1046058 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1120 22:28:27.803937 1046058 kubeadm.go:602] duration metric: took 27.22293ms to restartPrimaryControlPlane
	I1120 22:28:27.803960 1046058 kubeadm.go:403] duration metric: took 144.09676ms to StartCluster
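The restart path above decides that "the running cluster does not require reconfiguration" by diffing the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new. A simplified sketch of that decision is shown below (hypothetical code, not minikube's kubeadm.go; the file paths are taken from the log).

	// reconfigcheck: compare the deployed kubeadm config with the newly rendered one.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func needsReconfig(current, proposed string) (bool, error) {
		a, err := os.ReadFile(current)
		if os.IsNotExist(err) {
			return true, nil // no deployed config yet, so configuration is required
		} else if err != nil {
			return false, err
		}
		b, err := os.ReadFile(proposed)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(a, b), nil // any byte difference means the control plane must be reconfigured
	}

	func main() {
		changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("error:", err)
			os.Exit(1)
		}
		fmt.Println("reconfiguration required:", changed)
	}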
	I1120 22:28:27.804005 1046058 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.804084 1046058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.805018 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.805290 1046058 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:27.805671 1046058 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:27.805740 1046058 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-135623"
	I1120 22:28:27.805754 1046058 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-135623"
	W1120 22:28:27.805760 1046058 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:28:27.805781 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.806246 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.806640 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:27.806715 1046058 addons.go:70] Setting dashboard=true in profile "newest-cni-135623"
	I1120 22:28:27.806754 1046058 addons.go:239] Setting addon dashboard=true in "newest-cni-135623"
	W1120 22:28:27.806779 1046058 addons.go:248] addon dashboard should already be in state true
	I1120 22:28:27.806816 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.807269 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.809253 1046058 addons.go:70] Setting default-storageclass=true in profile "newest-cni-135623"
	I1120 22:28:27.809286 1046058 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-135623"
	I1120 22:28:27.809631 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.817886 1046058 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:27.821328 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:27.868711 1046058 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:28:27.868809 1046058 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:27.872790 1046058 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:28:27.872909 1046058 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:27.872920 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:27.872988 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.875945 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:28:27.875972 1046058 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:28:27.876044 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.877435 1046058 addons.go:239] Setting addon default-storageclass=true in "newest-cni-135623"
	W1120 22:28:27.877465 1046058 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:28:27.877492 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.877947 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.921797 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.940671 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.947918 1046058 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:27.947941 1046058 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:27.948007 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.980681 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:28.169240 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:28.211836 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:28:28.211859 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:28:28.212433 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:28.244789 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:28.316905 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:28:28.316932 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:28:28.383041 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:28:28.383070 1046058 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:28:28.476984 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:28:28.477008 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:28:28.504663 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:28:28.504709 1046058 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:28:28.527249 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:28:28.527276 1046058 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:28:28.548629 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:28:28.548669 1046058 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:28:28.569841 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:28:28.569869 1046058 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:28:28.588156 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:28.588203 1046058 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:28:28.611754 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:34.884888 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.715562649s)
	I1120 22:28:34.884933 1046058 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.672485041s)
	I1120 22:28:34.884971 1046058 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:34.885028 1046058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:34.885101 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.6402453s)
	I1120 22:28:34.885455 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.273653897s)
	I1120 22:28:34.888553 1046058 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-135623 addons enable metrics-server
	
	I1120 22:28:34.904008 1046058 api_server.go:72] duration metric: took 7.09865782s to wait for apiserver process to appear ...
	I1120 22:28:34.904079 1046058 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:34.904114 1046058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:28:34.914071 1046058 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 22:28:34.915150 1046058 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:34.915180 1046058 api_server.go:131] duration metric: took 11.07846ms to wait for apiserver health ...
	I1120 22:28:34.915190 1046058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:34.916538 1046058 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 22:28:34.918689 1046058 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:34.918728 1046058 system_pods.go:61] "coredns-66bc5c9577-9flb9" [3dc2f756-6d87-4c6c-a277-f78afd3dee9d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:34.918770 1046058 system_pods.go:61] "etcd-newest-cni-135623" [0de7f3f2-008e-4d81-9d64-817f1d6baac9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:28:34.918786 1046058 system_pods.go:61] "kindnet-qnvsk" [f7a38583-b1d7-4129-ad46-dd3ccb7319eb] Running
	I1120 22:28:34.918794 1046058 system_pods.go:61] "kube-apiserver-newest-cni-135623" [d04f855f-e0d5-4f66-8479-486e7801a0c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:28:34.918803 1046058 system_pods.go:61] "kube-controller-manager-newest-cni-135623" [216bbe7c-632b-4b80-bc44-3198afcc3979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:28:34.918812 1046058 system_pods.go:61] "kube-proxy-8cqbf" [0c0b8be5-8252-4341-b19a-5270b86a2b1d] Running
	I1120 22:28:34.918856 1046058 system_pods.go:61] "kube-scheduler-newest-cni-135623" [8d3fed71-fe6a-4425-ad2d-c37cd0c2de1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:28:34.918868 1046058 system_pods.go:61] "storage-provisioner" [21cbba0f-bc0e-4982-a846-6b4daa0506ba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:34.918876 1046058 system_pods.go:74] duration metric: took 3.680252ms to wait for pod list to return data ...
	I1120 22:28:34.918893 1046058 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:34.919479 1046058 addons.go:515] duration metric: took 7.113790518s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 22:28:34.921743 1046058 default_sa.go:45] found service account: "default"
	I1120 22:28:34.921770 1046058 default_sa.go:55] duration metric: took 2.870905ms for default service account to be created ...
	I1120 22:28:34.921783 1046058 kubeadm.go:587] duration metric: took 7.116439891s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:34.921801 1046058 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:34.924646 1046058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:34.924680 1046058 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:34.924692 1046058 node_conditions.go:105] duration metric: took 2.885649ms to run NodePressure ...
	I1120 22:28:34.924705 1046058 start.go:242] waiting for startup goroutines ...
	I1120 22:28:34.924713 1046058 start.go:247] waiting for cluster config update ...
	I1120 22:28:34.924725 1046058 start.go:256] writing updated cluster config ...
	I1120 22:28:34.925037 1046058 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:35.005977 1046058 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:35.009373 1046058 out.go:179] * Done! kubectl is now configured to use "newest-cni-135623" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.255924235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.27713583Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-8cqbf/POD" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.277424474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.310914045Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.313913985Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81ecb272-1949-4222-9942-5d43e9101799 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.331956867Z" level=info msg="Ran pod sandbox 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2 with infra container: kube-system/kube-proxy-8cqbf/POD" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.33322995Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=76b7fba9-a07c-4e77-845d-da8108caebb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.340238828Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3087d229-87c9-4d6e-a514-36596b1d8bc3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.342391217Z" level=info msg="Ran pod sandbox 096f426d57e93a46ceeb1b38363bc3d80bedebabfd3ba31b30171c91bb7da929 with infra container: kube-system/kindnet-qnvsk/POD" id=81ecb272-1949-4222-9942-5d43e9101799 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.350922879Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=77bba886-d5cc-4979-b199-f62505c4d9ba name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.358055451Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=34f70d43-7cb5-44e5-b072-59b9e8b5a4f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.35857092Z" level=info msg="Creating container: kube-system/kube-proxy-8cqbf/kube-proxy" id=8ad7fce9-d700-4388-8b2e-4a44408a9fcb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.359428695Z" level=info msg="Creating container: kube-system/kindnet-qnvsk/kindnet-cni" id=f2210141-dc26-4a52-b077-546c1fd59103 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.359526362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.364311178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.370514573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.376797575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.388152863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.396011081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.438268838Z" level=info msg="Created container 2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623: kube-system/kindnet-qnvsk/kindnet-cni" id=f2210141-dc26-4a52-b077-546c1fd59103 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.440519837Z" level=info msg="Starting container: 2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623" id=2f04697b-4b97-4c39-94f8-298cec6643d7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.44351122Z" level=info msg="Started container" PID=1070 containerID=2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623 description=kube-system/kindnet-qnvsk/kindnet-cni id=2f04697b-4b97-4c39-94f8-298cec6643d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=096f426d57e93a46ceeb1b38363bc3d80bedebabfd3ba31b30171c91bb7da929
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.491414325Z" level=info msg="Created container e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d: kube-system/kube-proxy-8cqbf/kube-proxy" id=8ad7fce9-d700-4388-8b2e-4a44408a9fcb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.492308228Z" level=info msg="Starting container: e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d" id=3fee2c28-05df-427e-b9b1-d3e6b9de38b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.495587097Z" level=info msg="Started container" PID=1074 containerID=e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d description=kube-system/kube-proxy-8cqbf/kube-proxy id=3fee2c28-05df-427e-b9b1-d3e6b9de38b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e5f4c321d3229       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   31d3a867b12c8       kube-proxy-8cqbf                            kube-system
	2111474ae1614       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   096f426d57e93       kindnet-qnvsk                               kube-system
	426da4579a571       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   db7d7cd74d689       kube-controller-manager-newest-cni-135623   kube-system
	994060783e1c9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   92cc01437d438       etcd-newest-cni-135623                      kube-system
	059409635a2cb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   154129792c860       kube-scheduler-newest-cni-135623            kube-system
	c4c11b2d5f9de       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   cb7138fad6c3e       kube-apiserver-newest-cni-135623            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-135623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-135623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-135623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_28_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:28:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-135623
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:28:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-135623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                04e07bd9-c8a6-4d46-86ba-5a3653e3028d
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-135623                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-qnvsk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-135623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-135623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-8cqbf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-135623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 40s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-135623 event: Registered Node newest-cni-135623 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-135623 event: Registered Node newest-cni-135623 in Controller
	
	
	==> dmesg <==
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	[Nov20 22:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3] <==
	{"level":"warn","ts":"2025-11-20T22:28:30.323242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.387342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.484476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.486146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.515346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.532013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.562687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.579878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.594156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.624946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.643218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.677575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.706657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.741715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.778535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.804540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.843449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.885166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.915361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.935304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.997923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.031846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.068393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.110285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.227049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:28:39 up  5:10,  0 user,  load average: 5.70, 4.08, 3.03
	Linux newest-cni-135623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623] <==
	I1120 22:28:33.720631       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:28:33.737953       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:28:33.738137       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:28:33.738152       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:28:33.738169       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:28:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:28:33.915494       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:28:33.923482       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:28:33.923515       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:28:33.924018       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59] <==
	I1120 22:28:32.890650       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:28:32.893447       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:28:32.893646       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 22:28:32.893705       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:28:32.911405       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:28:32.911477       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:28:32.911488       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:28:32.911495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:28:32.911500       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:28:32.914334       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:28:32.915711       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:28:32.950283       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 22:28:32.966832       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:28:33.051797       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:28:33.225980       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:28:34.565625       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:28:34.669323       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:28:34.712690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:28:34.731388       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:28:34.819366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.167"}
	I1120 22:28:34.836706       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.219.112"}
	I1120 22:28:37.141530       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:28:37.293079       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:28:37.342213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:28:37.446278       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132] <==
	I1120 22:28:36.886353       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:28:36.886597       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:28:36.893078       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:28:36.902712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:28:36.911160       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:28:36.914719       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:36.914769       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:28:36.914778       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:28:36.915729       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:28:36.917310       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:28:36.922888       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 22:28:36.927255       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:28:36.934766       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:28:36.936033       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:28:36.936044       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 22:28:36.936078       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:28:36.937381       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:28:36.937546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:36.937561       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:28:36.936093       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:28:36.936103       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:28:36.939851       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:28:36.940962       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:28:36.943260       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:28:36.943263       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-proxy [e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d] <==
	I1120 22:28:34.204390       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:28:34.486205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:28:34.592980       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:28:34.593018       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:28:34.593106       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:28:34.811585       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:28:34.811649       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:28:34.844258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:28:34.844579       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:28:34.844602       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:34.846127       1 config.go:200] "Starting service config controller"
	I1120 22:28:34.846149       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:28:34.846166       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:28:34.846172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:28:34.846205       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:28:34.846209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:28:34.852085       1 config.go:309] "Starting node config controller"
	I1120 22:28:34.852111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:28:34.852120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:28:34.946877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:28:34.946937       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:28:34.947038       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448] <==
	I1120 22:28:32.424932       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:32.432315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:28:32.432439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:28:32.435853       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:28:32.435931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 22:28:32.465476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 22:28:32.465739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:28:32.465881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 22:28:32.466030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 22:28:32.466239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:28:32.466371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:28:32.466475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 22:28:32.466582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:28:32.466719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:28:32.466870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:28:32.467013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 22:28:32.467386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:28:32.467501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:28:32.467574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:28:32.467632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:28:32.472487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 22:28:32.472704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:28:32.479798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:28:32.480040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:28:34.150888       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.924134     734 apiserver.go:52] "Watching apiserver"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.943351     734 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971176     734 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971283     734 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971326     734 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.972211     734 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: E1120 22:28:32.982298     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-135623\" already exists" pod="kube-system/etcd-newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.988171     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: E1120 22:28:32.988115     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-135623\" already exists" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.030928     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-xtables-lock\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031018     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-cni-cfg\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031042     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-lib-modules\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031069     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-lib-modules\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031091     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-xtables-lock\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.031968     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-135623\" already exists" pod="kube-system/kube-apiserver-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031990     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.071928     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-135623\" already exists" pod="kube-system/kube-controller-manager-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.072387     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.086185     734 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.112210     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-135623\" already exists" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: W1120 22:28:33.324702     734 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/crio-31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2 WatchSource:0}: Error finding container 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2: Status 404 returned error can't find the container with id 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2
	Nov 20 22:28:36 newest-cni-135623 kubelet[734]: I1120 22:28:36.365002     734 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-135623 -n newest-cni-135623
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-135623 -n newest-cni-135623: exit status 2 (350.826621ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-135623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv: exit status 1 (90.944775ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9flb9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gc8j2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qzzhv" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-135623
helpers_test.go:243: (dbg) docker inspect newest-cni-135623:

-- stdout --
	[
	    {
	        "Id": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	        "Created": "2025-11-20T22:27:40.188334711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1046187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:28:19.901161293Z",
	            "FinishedAt": "2025-11-20T22:28:18.858774786Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hostname",
	        "HostsPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/hosts",
	        "LogPath": "/var/lib/docker/containers/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084-json.log",
	        "Name": "/newest-cni-135623",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-135623:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-135623",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084",
	                "LowerDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98128029ef487373356dba28830bdce8555ad0c2a2afcabdb6e3c502fc888edb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-135623",
	                "Source": "/var/lib/docker/volumes/newest-cni-135623/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-135623",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-135623",
	                "name.minikube.sigs.k8s.io": "newest-cni-135623",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d00759dcfd2326940f2cc27a856ad67c0bfebd0b53558fdd995000d56de3bc9",
	            "SandboxKey": "/var/run/docker/netns/6d00759dcfd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34198"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34201"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34199"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34200"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-135623": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:01:82:5e:cf:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "384cacf95f51a5dca0506b04f083a5c52691e66165cd46827abd11d3e9dc7c6a",
	                    "EndpointID": "dd38260ce197a64da254acf1bcf6777179283e36ec5a133e1a59462a5465c51b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-135623",
	                        "22d262387b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623: exit status 2 (348.981116ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-135623 logs -n 25: (1.073436702s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:25 UTC │
	│ start   │ -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:25 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ stop    │ -p embed-certs-270206 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ start   │ -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:27 UTC │
	│ image   │ default-k8s-diff-port-559701 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │ 20 Nov 25 22:26 UTC │
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p newest-cni-135623 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p no-preload-041029 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ image   │ newest-cni-135623 image list --format=json                                                                                                                                                                                                    │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ pause   │ -p newest-cni-135623 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:28:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:28:19.608763 1046058 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:19.609016 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609046 1046058 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:19.609064 1046058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:19.609376 1046058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:19.609928 1046058 out.go:368] Setting JSON to false
	I1120 22:28:19.611285 1046058 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18625,"bootTime":1763659075,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:28:19.611397 1046058 start.go:143] virtualization:  
	I1120 22:28:19.614494 1046058 out.go:179] * [newest-cni-135623] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:28:19.618558 1046058 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:28:19.618754 1046058 notify.go:221] Checking for updates...
	I1120 22:28:19.624547 1046058 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:28:19.627376 1046058 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:19.631107 1046058 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:28:19.634185 1046058 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:28:19.637147 1046058 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:28:19.640555 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:19.641122 1046058 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:28:19.684060 1046058 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:28:19.684178 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.750922 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.741777755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.751146 1046058 docker.go:319] overlay module found
	I1120 22:28:19.754305 1046058 out.go:179] * Using the docker driver based on existing profile
	I1120 22:28:19.757094 1046058 start.go:309] selected driver: docker
	I1120 22:28:19.757115 1046058 start.go:930] validating driver "docker" against &{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.757220 1046058 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:28:19.757935 1046058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:19.812626 1046058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 22:28:19.803819677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:19.812991 1046058 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:19.813026 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:19.813080 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:19.813118 1046058 start.go:353] cluster config:
	{Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:19.818047 1046058 out.go:179] * Starting "newest-cni-135623" primary control-plane node in "newest-cni-135623" cluster
	I1120 22:28:19.820913 1046058 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:28:19.823836 1046058 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:28:19.826698 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:19.826751 1046058 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 22:28:19.826761 1046058 cache.go:65] Caching tarball of preloaded images
	I1120 22:28:19.826788 1046058 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:28:19.826856 1046058 preload.go:238] Found /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1120 22:28:19.826867 1046058 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 22:28:19.827009 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:19.846362 1046058 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:28:19.846385 1046058 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:28:19.846420 1046058 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:28:19.846446 1046058 start.go:360] acquireMachinesLock for newest-cni-135623: {Name:mk0a4bf77fbaa33e901b00e572e51831d9de02c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:19.846515 1046058 start.go:364] duration metric: took 47.221µs to acquireMachinesLock for "newest-cni-135623"
	I1120 22:28:19.846544 1046058 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:28:19.846555 1046058 fix.go:54] fixHost starting: 
	I1120 22:28:19.846863 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:19.863822 1046058 fix.go:112] recreateIfNeeded on newest-cni-135623: state=Stopped err=<nil>
	W1120 22:28:19.863860 1046058 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 22:28:15.948116 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	W1120 22:28:18.445599 1038356 node_ready.go:57] node "no-preload-041029" has "Ready":"False" status (will retry)
	I1120 22:28:18.947015 1038356 node_ready.go:49] node "no-preload-041029" is "Ready"
	I1120 22:28:18.947044 1038356 node_ready.go:38] duration metric: took 14.004801487s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:18.947057 1038356 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:18.947112 1038356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:18.973900 1038356 api_server.go:72] duration metric: took 16.338544725s to wait for apiserver process to appear ...
	I1120 22:28:18.973965 1038356 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:18.973994 1038356 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:28:18.990038 1038356 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:28:18.993887 1038356 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:18.993913 1038356 api_server.go:131] duration metric: took 19.939104ms to wait for apiserver health ...
	I1120 22:28:18.993921 1038356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:19.004685 1038356 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:19.004784 1038356 system_pods.go:61] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.004806 1038356 system_pods.go:61] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.004839 1038356 system_pods.go:61] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.004868 1038356 system_pods.go:61] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.004890 1038356 system_pods.go:61] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.004911 1038356 system_pods.go:61] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.004943 1038356 system_pods.go:61] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.004964 1038356 system_pods.go:61] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending
	I1120 22:28:19.004986 1038356 system_pods.go:74] duration metric: took 11.057947ms to wait for pod list to return data ...
	I1120 22:28:19.005008 1038356 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:19.009645 1038356 default_sa.go:45] found service account: "default"
	I1120 22:28:19.009670 1038356 default_sa.go:55] duration metric: took 4.640199ms for default service account to be created ...
	I1120 22:28:19.009680 1038356 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:28:19.017280 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.017308 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending
	I1120 22:28:19.017314 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.017319 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.017323 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.017326 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.017330 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.017333 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.017346 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.017366 1038356 retry.go:31] will retry after 288.297903ms: missing components: kube-dns
	I1120 22:28:19.317916 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.317956 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.317963 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.317970 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.317974 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.317979 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.317983 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.317987 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.317995 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.318009 1038356 retry.go:31] will retry after 387.681454ms: missing components: kube-dns
	I1120 22:28:19.711340 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:19.711374 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:28:19.711382 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:19.711388 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:19.711393 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:19.711398 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:19.711401 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:19.711411 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:19.711417 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 22:28:19.711431 1038356 retry.go:31] will retry after 439.187632ms: missing components: kube-dns
	I1120 22:28:20.214740 1038356 system_pods.go:86] 8 kube-system pods found
	I1120 22:28:20.214772 1038356 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running
	I1120 22:28:20.214778 1038356 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running
	I1120 22:28:20.214783 1038356 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:28:20.214787 1038356 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running
	I1120 22:28:20.214792 1038356 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running
	I1120 22:28:20.214797 1038356 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:28:20.214801 1038356 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running
	I1120 22:28:20.214804 1038356 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:28:20.214811 1038356 system_pods.go:126] duration metric: took 1.205126223s to wait for k8s-apps to be running ...
	I1120 22:28:20.214818 1038356 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:28:20.214872 1038356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:28:20.237045 1038356 system_svc.go:56] duration metric: took 22.216114ms WaitForService to wait for kubelet
	I1120 22:28:20.237071 1038356 kubeadm.go:587] duration metric: took 17.601722336s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:28:20.237090 1038356 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:20.249880 1038356 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:20.249909 1038356 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:20.249922 1038356 node_conditions.go:105] duration metric: took 12.825773ms to run NodePressure ...
	I1120 22:28:20.249934 1038356 start.go:242] waiting for startup goroutines ...
	I1120 22:28:20.249942 1038356 start.go:247] waiting for cluster config update ...
	I1120 22:28:20.249952 1038356 start.go:256] writing updated cluster config ...
	I1120 22:28:20.250241 1038356 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:20.254779 1038356 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:20.266794 1038356 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.274702 1038356 pod_ready.go:94] pod "coredns-66bc5c9577-6dbgj" is "Ready"
	I1120 22:28:20.274726 1038356 pod_ready.go:86] duration metric: took 7.908483ms for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.278017 1038356 pod_ready.go:83] waiting for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.285637 1038356 pod_ready.go:94] pod "etcd-no-preload-041029" is "Ready"
	I1120 22:28:20.285660 1038356 pod_ready.go:86] duration metric: took 7.62171ms for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.289274 1038356 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.299565 1038356 pod_ready.go:94] pod "kube-apiserver-no-preload-041029" is "Ready"
	I1120 22:28:20.299634 1038356 pod_ready.go:86] duration metric: took 10.333794ms for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.303953 1038356 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.661205 1038356 pod_ready.go:94] pod "kube-controller-manager-no-preload-041029" is "Ready"
	I1120 22:28:20.661282 1038356 pod_ready.go:86] duration metric: took 357.252156ms for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:20.860905 1038356 pod_ready.go:83] waiting for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.259972 1038356 pod_ready.go:94] pod "kube-proxy-n78zb" is "Ready"
	I1120 22:28:21.260000 1038356 pod_ready.go:86] duration metric: took 399.071073ms for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.461389 1038356 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860332 1038356 pod_ready.go:94] pod "kube-scheduler-no-preload-041029" is "Ready"
	I1120 22:28:21.860358 1038356 pod_ready.go:86] duration metric: took 398.939928ms for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:28:21.860370 1038356 pod_ready.go:40] duration metric: took 1.605560127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:28:21.916256 1038356 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:21.919813 1038356 out.go:179] * Done! kubectl is now configured to use "no-preload-041029" cluster and "default" namespace by default
	I1120 22:28:19.867117 1046058 out.go:252] * Restarting existing docker container for "newest-cni-135623" ...
	I1120 22:28:19.867221 1046058 cli_runner.go:164] Run: docker start newest-cni-135623
	I1120 22:28:20.167549 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:20.194360 1046058 kic.go:430] container "newest-cni-135623" state is running.
	I1120 22:28:20.194747 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:20.231080 1046058 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/config.json ...
	I1120 22:28:20.231352 1046058 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:20.231417 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:20.264515 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:20.269131 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:20.269155 1046058 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:20.270246 1046058 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:28:23.414799 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.414831 1046058 ubuntu.go:182] provisioning hostname "newest-cni-135623"
	I1120 22:28:23.414897 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.433748 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.434079 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.434094 1046058 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-135623 && echo "newest-cni-135623" | sudo tee /etc/hostname
	I1120 22:28:23.601694 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-135623
	
	I1120 22:28:23.601827 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:23.621179 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:23.621492 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:23.621514 1046058 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-135623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-135623/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-135623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:23.775228 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:28:23.775255 1046058 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:23.775302 1046058 ubuntu.go:190] setting up certificates
	I1120 22:28:23.775316 1046058 provision.go:84] configureAuth start
	I1120 22:28:23.775412 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:23.792924 1046058 provision.go:143] copyHostCerts
	I1120 22:28:23.792997 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:23.793017 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:23.793095 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:23.793212 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:23.793226 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:23.793255 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:23.793312 1046058 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:23.793322 1046058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:23.793347 1046058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:23.793400 1046058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.newest-cni-135623 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-135623]
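provision.go then issues a server certificate whose SANs cover the loopback address, the container IP and the machine names listed above. A compact sketch of producing a SAN-bearing certificate with the Go standard library (self-signed here for brevity, whereas minikube signs it with its ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-135623"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-135623"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed for the sketch; minikube uses its CA as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}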
	I1120 22:28:24.175067 1046058 provision.go:177] copyRemoteCerts
	I1120 22:28:24.175135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:24.175185 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.195224 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.300104 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:24.321466 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:28:24.348586 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 22:28:24.371336 1046058 provision.go:87] duration metric: took 595.971597ms to configureAuth
	I1120 22:28:24.371364 1046058 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:24.371566 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:24.371675 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.391446 1046058 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:24.391762 1046058 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1120 22:28:24.391782 1046058 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:24.739459 1046058 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:24.739483 1046058 machine.go:97] duration metric: took 4.508119608s to provisionDockerMachine
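Every provisioning step above (hostname, /etc/hosts, certificates, CRI-O options) runs as an SSH command against the container's forwarded port 34197 using the per-machine id_rsa key. A minimal sketch of that transport with golang.org/x/crypto/ssh (key path, user and command are illustrative; host-key checking is disabled only because the target is a local test container):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Per-machine private key, as referenced by sshutil.go in the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34197", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Any of the logged provisioning commands could be issued this way.
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}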
	I1120 22:28:24.739495 1046058 start.go:293] postStartSetup for "newest-cni-135623" (driver="docker")
	I1120 22:28:24.739506 1046058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:24.739587 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:24.739641 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.756979 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:24.860012 1046058 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:24.863669 1046058 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:24.863700 1046058 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:24.863712 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:24.863777 1046058 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:24.863878 1046058 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:24.863998 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:24.871985 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:24.890430 1046058 start.go:296] duration metric: took 150.918846ms for postStartSetup
	I1120 22:28:24.890571 1046058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:24.890616 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:24.908123 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.013420 1046058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:25.019768 1046058 fix.go:56] duration metric: took 5.17320429s for fixHost
	I1120 22:28:25.019805 1046058 start.go:83] releasing machines lock for "newest-cni-135623", held for 5.173274428s
	I1120 22:28:25.019883 1046058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-135623
	I1120 22:28:25.040360 1046058 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:25.040420 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.040476 1046058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:25.040614 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:25.064095 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.071097 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:25.166635 1046058 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:25.263474 1046058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:25.301612 1046058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:25.305732 1046058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:25.305810 1046058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:25.313475 1046058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:28:25.313550 1046058 start.go:496] detecting cgroup driver to use...
	I1120 22:28:25.313597 1046058 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:25.313651 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:25.328863 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:25.342166 1046058 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:25.342229 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:25.358110 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:25.371853 1046058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:25.487091 1046058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:25.613512 1046058 docker.go:234] disabling docker service ...
	I1120 22:28:25.613595 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:25.630096 1046058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:25.645594 1046058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:25.776246 1046058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:25.888693 1046058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:25.901960 1046058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:25.917255 1046058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:25.917377 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.927084 1046058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:25.927198 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.936187 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.944988 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.953615 1046058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:25.961745 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.971413 1046058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.980044 1046058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:25.988745 1046058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:25.996452 1046058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:26.004915 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.122045 1046058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:28:26.307050 1046058 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:26.307196 1046058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:26.311586 1046058 start.go:564] Will wait 60s for crictl version
	I1120 22:28:26.311707 1046058 ssh_runner.go:195] Run: which crictl
	I1120 22:28:26.315838 1046058 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:26.343825 1046058 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
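After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, minikube waits up to 60s for the CRI socket to reappear and then queries the runtime version through crictl, as logged above. A minimal sketch of that wait-then-verify step (socket path and timeout mirror the log; the helper name is made up):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the CRI socket exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	// Equivalent to the `sudo /usr/local/bin/crictl version` call in the log.
	out, err := exec.Command("crictl", "version").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}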
	I1120 22:28:26.344002 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.372720 1046058 ssh_runner.go:195] Run: crio --version
	I1120 22:28:26.405777 1046058 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:26.408743 1046058 cli_runner.go:164] Run: docker network inspect newest-cni-135623 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:26.425613 1046058 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:26.429809 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
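The one-liner above keeps /etc/hosts idempotent: it drops any existing host.minikube.internal line before appending the current gateway IP, so repeated starts never accumulate stale entries. The same edit expressed in Go (path and entry taken from the log; it must run as root because /etc/hosts is root-owned):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.76.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	// Drop any previous host.minikube.internal line, then append the fresh one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}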
	I1120 22:28:26.443060 1046058 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 22:28:26.445993 1046058 kubeadm.go:884] updating cluster {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:26.446166 1046058 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:26.446252 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.484434 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.484459 1046058 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:28:26.484521 1046058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:26.510217 1046058 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:26.510243 1046058 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:26.510251 1046058 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:26.510396 1046058 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-135623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:26.510527 1046058 ssh_runner.go:195] Run: crio config
	I1120 22:28:26.590324 1046058 cni.go:84] Creating CNI manager for ""
	I1120 22:28:26.590350 1046058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:26.590372 1046058 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 22:28:26.592701 1046058 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-135623 NodeName:newest-cni-135623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:26.592862 1046058 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-135623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
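The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later diffed against the active file to decide whether the control plane needs reconfiguring. As a sketch, assuming the installed kubeadm is new enough to ship the `kubeadm config validate` subcommand (an assumption, not shown in this log), the rendered file could be sanity-checked before use:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumes `kubeadm config validate` exists in the installed kubeadm release.
	out, err := exec.Command("kubeadm", "config", "validate",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}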
	
	I1120 22:28:26.592938 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:26.608056 1046058 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:26.608135 1046058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:26.616237 1046058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:28:26.629637 1046058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:26.642733 1046058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1120 22:28:26.655998 1046058 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:28:26.659708 1046058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:26.677753 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:26.801819 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:26.819744 1046058 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623 for IP: 192.168.76.2
	I1120 22:28:26.819766 1046058 certs.go:195] generating shared ca certs ...
	I1120 22:28:26.819783 1046058 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:26.819916 1046058 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:28:26.819968 1046058 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:28:26.819981 1046058 certs.go:257] generating profile certs ...
	I1120 22:28:26.820068 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/client.key
	I1120 22:28:26.820138 1046058 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key.0fed1dd1
	I1120 22:28:26.820212 1046058 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key
	I1120 22:28:26.820326 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:28:26.820361 1046058 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:28:26.820373 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:28:26.820398 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:28:26.820424 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:28:26.820447 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:28:26.820499 1046058 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:26.821136 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:28:26.845858 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:28:26.866347 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:28:26.890289 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:28:26.915043 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:28:26.948865 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:28:26.989139 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:28:27.013019 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/newest-cni-135623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:28:27.042486 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:28:27.063401 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:28:27.083564 1046058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:28:27.102595 1046058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:28:27.115311 1046058 ssh_runner.go:195] Run: openssl version
	I1120 22:28:27.122767 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.133554 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:28:27.142802 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147578 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.147652 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:27.190059 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:28:27.198365 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.205986 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:28:27.213966 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217915 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.217991 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:28:27.260086 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:28:27.268008 1046058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.275695 1046058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:28:27.283799 1046058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287828 1046058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.287937 1046058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:28:27.329873 1046058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:28:27.337431 1046058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:28:27.341283 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:28:27.382524 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:28:27.424356 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:28:27.475683 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:28:27.527122 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:28:27.595186 1046058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
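The `-checkend 86400` invocations above ask openssl whether each control-plane certificate will still be valid 24 hours from now. The equivalent check in Go against a PEM file (the path below is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, i.e. what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}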
	I1120 22:28:27.659874 1046058 kubeadm.go:401] StartCluster: {Name:newest-cni-135623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-135623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:27.660023 1046058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:28:27.660125 1046058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:28:27.731473 1046058 cri.go:89] found id: "994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3"
	I1120 22:28:27.731540 1046058 cri.go:89] found id: "059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448"
	I1120 22:28:27.731559 1046058 cri.go:89] found id: "c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59"
	I1120 22:28:27.731580 1046058 cri.go:89] found id: ""
	I1120 22:28:27.731684 1046058 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:28:27.759544 1046058 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:27Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:27.759694 1046058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:28:27.776624 1046058 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:28:27.776687 1046058 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:28:27.776793 1046058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:28:27.790113 1046058 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:28:27.790746 1046058 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-135623" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.791071 1046058 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-135623" cluster setting kubeconfig missing "newest-cni-135623" context setting]
	I1120 22:28:27.791595 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.793271 1046058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:28:27.803864 1046058 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1120 22:28:27.803937 1046058 kubeadm.go:602] duration metric: took 27.22293ms to restartPrimaryControlPlane
	I1120 22:28:27.803960 1046058 kubeadm.go:403] duration metric: took 144.09676ms to StartCluster
	I1120 22:28:27.804005 1046058 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.804084 1046058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:27.805018 1046058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:27.805290 1046058 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:27.805671 1046058 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:27.805740 1046058 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-135623"
	I1120 22:28:27.805754 1046058 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-135623"
	W1120 22:28:27.805760 1046058 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:28:27.805781 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.806246 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.806640 1046058 config.go:182] Loaded profile config "newest-cni-135623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:27.806715 1046058 addons.go:70] Setting dashboard=true in profile "newest-cni-135623"
	I1120 22:28:27.806754 1046058 addons.go:239] Setting addon dashboard=true in "newest-cni-135623"
	W1120 22:28:27.806779 1046058 addons.go:248] addon dashboard should already be in state true
	I1120 22:28:27.806816 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.807269 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.809253 1046058 addons.go:70] Setting default-storageclass=true in profile "newest-cni-135623"
	I1120 22:28:27.809286 1046058 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-135623"
	I1120 22:28:27.809631 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.817886 1046058 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:27.821328 1046058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:27.868711 1046058 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:28:27.868809 1046058 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:27.872790 1046058 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:28:27.872909 1046058 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:27.872920 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:27.872988 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.875945 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:28:27.875972 1046058 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:28:27.876044 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.877435 1046058 addons.go:239] Setting addon default-storageclass=true in "newest-cni-135623"
	W1120 22:28:27.877465 1046058 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:28:27.877492 1046058 host.go:66] Checking if "newest-cni-135623" exists ...
	I1120 22:28:27.877947 1046058 cli_runner.go:164] Run: docker container inspect newest-cni-135623 --format={{.State.Status}}
	I1120 22:28:27.921797 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.940671 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:27.947918 1046058 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:27.947941 1046058 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:27.948007 1046058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-135623
	I1120 22:28:27.980681 1046058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/newest-cni-135623/id_rsa Username:docker}
	I1120 22:28:28.169240 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:28.211836 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:28:28.211859 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:28:28.212433 1046058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:28.244789 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:28.316905 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:28:28.316932 1046058 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:28:28.383041 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:28:28.383070 1046058 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:28:28.476984 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:28:28.477008 1046058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:28:28.504663 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:28:28.504709 1046058 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:28:28.527249 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:28:28.527276 1046058 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:28:28.548629 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:28:28.548669 1046058 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:28:28.569841 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:28:28.569869 1046058 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:28:28.588156 1046058 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:28.588203 1046058 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:28:28.611754 1046058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:34.884888 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.715562649s)
	I1120 22:28:34.884933 1046058 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.672485041s)
	I1120 22:28:34.884971 1046058 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:28:34.885028 1046058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:28:34.885101 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.6402453s)
	I1120 22:28:34.885455 1046058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.273653897s)
	I1120 22:28:34.888553 1046058 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-135623 addons enable metrics-server
	
	I1120 22:28:34.904008 1046058 api_server.go:72] duration metric: took 7.09865782s to wait for apiserver process to appear ...
	I1120 22:28:34.904079 1046058 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:28:34.904114 1046058 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 22:28:34.914071 1046058 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 22:28:34.915150 1046058 api_server.go:141] control plane version: v1.34.1
	I1120 22:28:34.915180 1046058 api_server.go:131] duration metric: took 11.07846ms to wait for apiserver health ...
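api_server.go treats the control plane as up once https://192.168.76.2:8443/healthz answers 200 with body "ok", as seen above. A minimal sketch of that health probe (certificate verification is skipped only to keep the sketch self-contained; a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-contained sketch only; trust the cluster CA in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(1 * time.Second)
	}
	fmt.Println("apiserver never reported healthy")
}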
	I1120 22:28:34.915190 1046058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:28:34.916538 1046058 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 22:28:34.918689 1046058 system_pods.go:59] 8 kube-system pods found
	I1120 22:28:34.918728 1046058 system_pods.go:61] "coredns-66bc5c9577-9flb9" [3dc2f756-6d87-4c6c-a277-f78afd3dee9d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:34.918770 1046058 system_pods.go:61] "etcd-newest-cni-135623" [0de7f3f2-008e-4d81-9d64-817f1d6baac9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:28:34.918786 1046058 system_pods.go:61] "kindnet-qnvsk" [f7a38583-b1d7-4129-ad46-dd3ccb7319eb] Running
	I1120 22:28:34.918794 1046058 system_pods.go:61] "kube-apiserver-newest-cni-135623" [d04f855f-e0d5-4f66-8479-486e7801a0c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:28:34.918803 1046058 system_pods.go:61] "kube-controller-manager-newest-cni-135623" [216bbe7c-632b-4b80-bc44-3198afcc3979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:28:34.918812 1046058 system_pods.go:61] "kube-proxy-8cqbf" [0c0b8be5-8252-4341-b19a-5270b86a2b1d] Running
	I1120 22:28:34.918856 1046058 system_pods.go:61] "kube-scheduler-newest-cni-135623" [8d3fed71-fe6a-4425-ad2d-c37cd0c2de1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:28:34.918868 1046058 system_pods.go:61] "storage-provisioner" [21cbba0f-bc0e-4982-a846-6b4daa0506ba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 22:28:34.918876 1046058 system_pods.go:74] duration metric: took 3.680252ms to wait for pod list to return data ...
	I1120 22:28:34.918893 1046058 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:28:34.919479 1046058 addons.go:515] duration metric: took 7.113790518s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 22:28:34.921743 1046058 default_sa.go:45] found service account: "default"
	I1120 22:28:34.921770 1046058 default_sa.go:55] duration metric: took 2.870905ms for default service account to be created ...
	I1120 22:28:34.921783 1046058 kubeadm.go:587] duration metric: took 7.116439891s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 22:28:34.921801 1046058 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:28:34.924646 1046058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:28:34.924680 1046058 node_conditions.go:123] node cpu capacity is 2
	I1120 22:28:34.924692 1046058 node_conditions.go:105] duration metric: took 2.885649ms to run NodePressure ...
	I1120 22:28:34.924705 1046058 start.go:242] waiting for startup goroutines ...
	I1120 22:28:34.924713 1046058 start.go:247] waiting for cluster config update ...
	I1120 22:28:34.924725 1046058 start.go:256] writing updated cluster config ...
	I1120 22:28:34.925037 1046058 ssh_runner.go:195] Run: rm -f paused
	I1120 22:28:35.005977 1046058 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:28:35.009373 1046058 out.go:179] * Done! kubectl is now configured to use "newest-cni-135623" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.255924235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.27713583Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-8cqbf/POD" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.277424474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.310914045Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.313913985Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81ecb272-1949-4222-9942-5d43e9101799 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.331956867Z" level=info msg="Ran pod sandbox 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2 with infra container: kube-system/kube-proxy-8cqbf/POD" id=a62e3f45-8587-42d6-a555-3b1efb5923e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.33322995Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=76b7fba9-a07c-4e77-845d-da8108caebb9 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.340238828Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3087d229-87c9-4d6e-a514-36596b1d8bc3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.342391217Z" level=info msg="Ran pod sandbox 096f426d57e93a46ceeb1b38363bc3d80bedebabfd3ba31b30171c91bb7da929 with infra container: kube-system/kindnet-qnvsk/POD" id=81ecb272-1949-4222-9942-5d43e9101799 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.350922879Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=77bba886-d5cc-4979-b199-f62505c4d9ba name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.358055451Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=34f70d43-7cb5-44e5-b072-59b9e8b5a4f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.35857092Z" level=info msg="Creating container: kube-system/kube-proxy-8cqbf/kube-proxy" id=8ad7fce9-d700-4388-8b2e-4a44408a9fcb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.359428695Z" level=info msg="Creating container: kube-system/kindnet-qnvsk/kindnet-cni" id=f2210141-dc26-4a52-b077-546c1fd59103 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.359526362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.364311178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.370514573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.376797575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.388152863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.396011081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.438268838Z" level=info msg="Created container 2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623: kube-system/kindnet-qnvsk/kindnet-cni" id=f2210141-dc26-4a52-b077-546c1fd59103 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.440519837Z" level=info msg="Starting container: 2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623" id=2f04697b-4b97-4c39-94f8-298cec6643d7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.44351122Z" level=info msg="Started container" PID=1070 containerID=2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623 description=kube-system/kindnet-qnvsk/kindnet-cni id=2f04697b-4b97-4c39-94f8-298cec6643d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=096f426d57e93a46ceeb1b38363bc3d80bedebabfd3ba31b30171c91bb7da929
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.491414325Z" level=info msg="Created container e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d: kube-system/kube-proxy-8cqbf/kube-proxy" id=8ad7fce9-d700-4388-8b2e-4a44408a9fcb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.492308228Z" level=info msg="Starting container: e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d" id=3fee2c28-05df-427e-b9b1-d3e6b9de38b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:28:33 newest-cni-135623 crio[615]: time="2025-11-20T22:28:33.495587097Z" level=info msg="Started container" PID=1074 containerID=e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d description=kube-system/kube-proxy-8cqbf/kube-proxy id=3fee2c28-05df-427e-b9b1-d3e6b9de38b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e5f4c321d3229       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   31d3a867b12c8       kube-proxy-8cqbf                            kube-system
	2111474ae1614       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   096f426d57e93       kindnet-qnvsk                               kube-system
	426da4579a571       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   db7d7cd74d689       kube-controller-manager-newest-cni-135623   kube-system
	994060783e1c9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   92cc01437d438       etcd-newest-cni-135623                      kube-system
	059409635a2cb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   154129792c860       kube-scheduler-newest-cni-135623            kube-system
	c4c11b2d5f9de       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   cb7138fad6c3e       kube-apiserver-newest-cni-135623            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-135623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-135623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-135623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_28_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:28:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-135623
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:28:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 22:28:32 +0000   Thu, 20 Nov 2025 22:28:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-135623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                04e07bd9-c8a6-4d46-86ba-5a3653e3028d
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-135623                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-qnvsk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-135623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-135623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-8cqbf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-135623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-135623 event: Registered Node newest-cni-135623 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-135623 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-135623 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-135623 event: Registered Node newest-cni-135623 in Controller
	
	
	==> dmesg <==
	[ +24.640666] overlayfs: idmapped layers are currently not supported
	[Nov20 22:06] overlayfs: idmapped layers are currently not supported
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	[Nov20 22:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [994060783e1c97d7c1c09724f225c297f94952fd74555ef5c60df0c2669377d3] <==
	{"level":"warn","ts":"2025-11-20T22:28:30.323242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.387342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.484476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.486146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.515346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.532013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.562687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.579878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.594156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.624946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.643218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.677575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.706657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.741715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.778535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.804540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.843449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.885166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.915361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.935304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:30.997923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.031846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.068393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.110285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:28:31.227049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:28:41 up  5:10,  0 user,  load average: 5.64, 4.10, 3.04
	Linux newest-cni-135623 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2111474ae16143b7e18dde9a72a00fac49339f04cb75b375bd409be9015d1623] <==
	I1120 22:28:33.720631       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:28:33.737953       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 22:28:33.738137       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:28:33.738152       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:28:33.738169       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:28:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:28:33.915494       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:28:33.923482       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:28:33.923515       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:28:33.924018       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c4c11b2d5f9de615c1362209a3d4e356df8a02d81b014351af5ee3d564d65f59] <==
	I1120 22:28:32.890650       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:28:32.893447       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:28:32.893646       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 22:28:32.893705       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:28:32.911405       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 22:28:32.911477       1 aggregator.go:171] initial CRD sync complete...
	I1120 22:28:32.911488       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 22:28:32.911495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 22:28:32.911500       1 cache.go:39] Caches are synced for autoregister controller
	I1120 22:28:32.914334       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:28:32.915711       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:28:32.950283       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 22:28:32.966832       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 22:28:33.051797       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:28:33.225980       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:28:34.565625       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:28:34.669323       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:28:34.712690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:28:34.731388       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:28:34.819366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.167"}
	I1120 22:28:34.836706       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.219.112"}
	I1120 22:28:37.141530       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:28:37.293079       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 22:28:37.342213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:28:37.446278       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [426da4579a571a9ffcb380b31c748bfb7455704b87ed67ee995cb8979390b132] <==
	I1120 22:28:36.886353       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 22:28:36.886597       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:28:36.893078       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:28:36.902712       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:28:36.911160       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:28:36.914719       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:36.914769       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:28:36.914778       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:28:36.915729       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 22:28:36.917310       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 22:28:36.922888       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 22:28:36.927255       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 22:28:36.934766       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:28:36.936033       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:28:36.936044       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 22:28:36.936078       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:28:36.937381       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:28:36.937546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:28:36.937561       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:28:36.936093       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 22:28:36.936103       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 22:28:36.939851       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:28:36.940962       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 22:28:36.943260       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 22:28:36.943263       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-proxy [e5f4c321d322999a8629597f7e1933fd7bceb5bedd7b32b5442fdcb07af6ef0d] <==
	I1120 22:28:34.204390       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:28:34.486205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:28:34.592980       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:28:34.593018       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 22:28:34.593106       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:28:34.811585       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:28:34.811649       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:28:34.844258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:28:34.844579       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:28:34.844602       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:34.846127       1 config.go:200] "Starting service config controller"
	I1120 22:28:34.846149       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:28:34.846166       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:28:34.846172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:28:34.846205       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:28:34.846209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:28:34.852085       1 config.go:309] "Starting node config controller"
	I1120 22:28:34.852111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:28:34.852120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:28:34.946877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:28:34.946937       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:28:34.947038       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [059409635a2cb5c5a2351453976d3a7badf182fd048d97402160335d0f15c448] <==
	I1120 22:28:32.424932       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:28:32.432315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:28:32.432439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:28:32.435853       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:28:32.435931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 22:28:32.465476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 22:28:32.465739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 22:28:32.465881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 22:28:32.466030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 22:28:32.466239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 22:28:32.466371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 22:28:32.466475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 22:28:32.466582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 22:28:32.466719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 22:28:32.466870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 22:28:32.467013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 22:28:32.467386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 22:28:32.467501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 22:28:32.467574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 22:28:32.467632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 22:28:32.472487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 22:28:32.472704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 22:28:32.479798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 22:28:32.480040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1120 22:28:34.150888       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.924134     734 apiserver.go:52] "Watching apiserver"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.943351     734 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971176     734 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971283     734 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.971326     734 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.972211     734 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: E1120 22:28:32.982298     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-135623\" already exists" pod="kube-system/etcd-newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: I1120 22:28:32.988171     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-135623"
	Nov 20 22:28:32 newest-cni-135623 kubelet[734]: E1120 22:28:32.988115     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-135623\" already exists" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.030928     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-xtables-lock\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031018     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-cni-cfg\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031042     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-lib-modules\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031069     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0b8be5-8252-4341-b19a-5270b86a2b1d-lib-modules\") pod \"kube-proxy-8cqbf\" (UID: \"0c0b8be5-8252-4341-b19a-5270b86a2b1d\") " pod="kube-system/kube-proxy-8cqbf"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031091     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a38583-b1d7-4129-ad46-dd3ccb7319eb-xtables-lock\") pod \"kindnet-qnvsk\" (UID: \"f7a38583-b1d7-4129-ad46-dd3ccb7319eb\") " pod="kube-system/kindnet-qnvsk"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.031968     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-135623\" already exists" pod="kube-system/kube-apiserver-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.031990     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.071928     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-135623\" already exists" pod="kube-system/kube-controller-manager-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.072387     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: I1120 22:28:33.086185     734 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: E1120 22:28:33.112210     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-135623\" already exists" pod="kube-system/kube-scheduler-newest-cni-135623"
	Nov 20 22:28:33 newest-cni-135623 kubelet[734]: W1120 22:28:33.324702     734 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/22d262387b8b3477bbf7bf91735ad1bc7694c5c020a090c247af676ae961d084/crio-31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2 WatchSource:0}: Error finding container 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2: Status 404 returned error can't find the container with id 31d3a867b12c8f3b3b91a63b991fea0b23e9fbcbe50c735eff35012a69359fa2
	Nov 20 22:28:36 newest-cni-135623 kubelet[734]: I1120 22:28:36.365002     734 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:28:36 newest-cni-135623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-135623 -n newest-cni-135623
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-135623 -n newest-cni-135623: exit status 2 (377.423385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-135623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv: exit status 1 (91.231038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9flb9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gc8j2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qzzhv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-135623 describe pod coredns-66bc5c9577-9flb9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gc8j2 kubernetes-dashboard-855c9754f9-qzzhv: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.40s)
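The no-preload Pause trace below shows the sequence that precedes the exit status 80: the pause helper stops the kubelet, then repeatedly runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O image. A minimal shell sketch of the same checks, run by hand on the node, follows; the /run/crio/runc path is an illustrative assumption, not a value taken from this report.

	# Enumerate CRI containers the same way the pause helper does (runtime-agnostic).
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The follow-up call minikube makes; in the trace below it fails because
	# runc's default state directory /run/runc does not exist on this image.
	sudo runc list -f json
	# Locate the runtime root CRI-O is actually configured with, then point runc at it.
	grep -R runtime_root /etc/crio/ 2>/dev/null
	sudo runc --root /run/crio/runc list   # assumed path; substitute the configured runtime_root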

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-041029 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-041029 --alsologtostderr -v=1: exit status 80 (2.100588375s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-041029 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:30:01.628928 1054654 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:30:01.629192 1054654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:30:01.629223 1054654 out.go:374] Setting ErrFile to fd 2...
	I1120 22:30:01.629241 1054654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:30:01.629593 1054654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:30:01.629940 1054654 out.go:368] Setting JSON to false
	I1120 22:30:01.629998 1054654 mustload.go:66] Loading cluster: no-preload-041029
	I1120 22:30:01.630457 1054654 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:30:01.631194 1054654 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:30:01.650940 1054654 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:30:01.651520 1054654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:30:01.728065 1054654 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:30:01.717618314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:30:01.728933 1054654 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-041029 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 22:30:01.732605 1054654 out.go:179] * Pausing node no-preload-041029 ... 
	I1120 22:30:01.735641 1054654 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:30:01.736043 1054654 ssh_runner.go:195] Run: systemctl --version
	I1120 22:30:01.736098 1054654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:30:01.755345 1054654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:30:01.858311 1054654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:30:01.873667 1054654 pause.go:52] kubelet running: true
	I1120 22:30:01.873756 1054654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:30:02.134556 1054654 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:30:02.134661 1054654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:30:02.209924 1054654 cri.go:89] found id: "41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf"
	I1120 22:30:02.209966 1054654 cri.go:89] found id: "47eef4f0b9636eb9f49ce7cfceedd7b832747ca4656d77970e8755154fc7ac35"
	I1120 22:30:02.209971 1054654 cri.go:89] found id: "a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090"
	I1120 22:30:02.209976 1054654 cri.go:89] found id: "e3ff002bcd2e24647b6415e521297e2309e2f39cdf9a3f07226779379f304671"
	I1120 22:30:02.209979 1054654 cri.go:89] found id: "da42598cf8490287fd97dafd07a73f5eaa9f8fa0e2bcbe2f23c4598aaec33417"
	I1120 22:30:02.209983 1054654 cri.go:89] found id: "e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab"
	I1120 22:30:02.209986 1054654 cri.go:89] found id: "0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0"
	I1120 22:30:02.209989 1054654 cri.go:89] found id: "f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863"
	I1120 22:30:02.209993 1054654 cri.go:89] found id: "1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6"
	I1120 22:30:02.210000 1054654 cri.go:89] found id: "203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	I1120 22:30:02.210003 1054654 cri.go:89] found id: "d7207e0f6514d7dd0cc35630dc0c8be98fda4a396f91d7842768b91e9cf4adf1"
	I1120 22:30:02.210007 1054654 cri.go:89] found id: ""
	I1120 22:30:02.210059 1054654 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:30:02.230737 1054654 retry.go:31] will retry after 317.406264ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:30:02Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:30:02.549346 1054654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:30:02.564220 1054654 pause.go:52] kubelet running: false
	I1120 22:30:02.564323 1054654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:30:02.743567 1054654 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:30:02.743653 1054654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:30:02.815181 1054654 cri.go:89] found id: "41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf"
	I1120 22:30:02.815209 1054654 cri.go:89] found id: "47eef4f0b9636eb9f49ce7cfceedd7b832747ca4656d77970e8755154fc7ac35"
	I1120 22:30:02.815214 1054654 cri.go:89] found id: "a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090"
	I1120 22:30:02.815219 1054654 cri.go:89] found id: "e3ff002bcd2e24647b6415e521297e2309e2f39cdf9a3f07226779379f304671"
	I1120 22:30:02.815223 1054654 cri.go:89] found id: "da42598cf8490287fd97dafd07a73f5eaa9f8fa0e2bcbe2f23c4598aaec33417"
	I1120 22:30:02.815227 1054654 cri.go:89] found id: "e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab"
	I1120 22:30:02.815230 1054654 cri.go:89] found id: "0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0"
	I1120 22:30:02.815234 1054654 cri.go:89] found id: "f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863"
	I1120 22:30:02.815237 1054654 cri.go:89] found id: "1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6"
	I1120 22:30:02.815244 1054654 cri.go:89] found id: "203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	I1120 22:30:02.815249 1054654 cri.go:89] found id: "d7207e0f6514d7dd0cc35630dc0c8be98fda4a396f91d7842768b91e9cf4adf1"
	I1120 22:30:02.815252 1054654 cri.go:89] found id: ""
	I1120 22:30:02.815304 1054654 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:30:02.827180 1054654 retry.go:31] will retry after 535.793727ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:30:02Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:30:03.363991 1054654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:30:03.379011 1054654 pause.go:52] kubelet running: false
	I1120 22:30:03.379121 1054654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 22:30:03.554288 1054654 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 22:30:03.554398 1054654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 22:30:03.641123 1054654 cri.go:89] found id: "41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf"
	I1120 22:30:03.641151 1054654 cri.go:89] found id: "47eef4f0b9636eb9f49ce7cfceedd7b832747ca4656d77970e8755154fc7ac35"
	I1120 22:30:03.641156 1054654 cri.go:89] found id: "a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090"
	I1120 22:30:03.641159 1054654 cri.go:89] found id: "e3ff002bcd2e24647b6415e521297e2309e2f39cdf9a3f07226779379f304671"
	I1120 22:30:03.641162 1054654 cri.go:89] found id: "da42598cf8490287fd97dafd07a73f5eaa9f8fa0e2bcbe2f23c4598aaec33417"
	I1120 22:30:03.641166 1054654 cri.go:89] found id: "e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab"
	I1120 22:30:03.641169 1054654 cri.go:89] found id: "0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0"
	I1120 22:30:03.641172 1054654 cri.go:89] found id: "f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863"
	I1120 22:30:03.641175 1054654 cri.go:89] found id: "1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6"
	I1120 22:30:03.641181 1054654 cri.go:89] found id: "203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	I1120 22:30:03.641188 1054654 cri.go:89] found id: "d7207e0f6514d7dd0cc35630dc0c8be98fda4a396f91d7842768b91e9cf4adf1"
	I1120 22:30:03.641192 1054654 cri.go:89] found id: ""
	I1120 22:30:03.641277 1054654 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 22:30:03.656247 1054654 out.go:203] 
	W1120 22:30:03.659077 1054654 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:30:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:30:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 22:30:03.659140 1054654 out.go:285] * 
	* 
	W1120 22:30:03.668805 1054654 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 22:30:03.671992 1054654 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-041029 --alsologtostderr -v=1 failed: exit status 80
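For triage, the failing probe can be replayed by hand. The log above shows the pause path listing CRI containers and then shelling out to `sudo runc list -f json`, retrying with a growing delay before exiting with GUEST_PAUSE. The sketch below reproduces that probe from the host; it assumes the container name no-preload-041029 taken from this report and working `docker exec` access, and it is an illustrative standalone program, not minikube's own retry helper.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const container = "no-preload-041029" // container name from the failing test above
        delay := 300 * time.Millisecond

        for attempt := 1; attempt <= 3; attempt++ {
            // Same command the pause path ran inside the kic container.
            out, err := exec.Command("docker", "exec", container,
                "sudo", "runc", "list", "-f", "json").CombinedOutput()
            if err == nil {
                fmt.Printf("attempt %d succeeded:\n%s\n", attempt, out)
                return
            }
            fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
            time.Sleep(delay)
            delay *= 2 // grow the wait roughly like the retries logged above
        }
        fmt.Println("runc listing kept failing; check whether /run/runc exists inside the container")
    }

If the manual run reproduces the "open /run/runc: no such file or directory" error seen above, the post-mortem output that follows (docker inspect, minikube status, minikube logs) is the next place to look.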
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-041029
helpers_test.go:243: (dbg) docker inspect no-preload-041029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	        "Created": "2025-11-20T22:27:06.220478605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1050459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:28:47.283890474Z",
	            "FinishedAt": "2025-11-20T22:28:45.986274129Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hostname",
	        "HostsPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hosts",
	        "LogPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71-json.log",
	        "Name": "/no-preload-041029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-041029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-041029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	                "LowerDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-041029",
	                "Source": "/var/lib/docker/volumes/no-preload-041029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-041029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-041029",
	                "name.minikube.sigs.k8s.io": "no-preload-041029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18588ba00177f5556f1d5ced3d847ab4a70cf86f42046bee341cb697a4e056a0",
	            "SandboxKey": "/var/run/docker/netns/18588ba00177",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-041029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:ff:7a:0d:3f:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d249c184d92c757ccd210aec69d5acdf56f64a6ec2365db3e9108375c30dd5a",
	                    "EndpointID": "5f79e2d32b684030348019203eb6174025ca751651cb28f8fa499b42c2d5f37e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-041029",
	                        "8049b6a31f79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
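The Ports block in the inspect output above also explains the template seen earlier in this log, where cli_runner indexes NetworkSettings.Ports to find the SSH endpoint: 22/tcp resolves to host port 34202 on 127.0.0.1, which the test's SSH client then dials. A minimal sketch of the same lookup, assuming only that the container from this report is still running:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the test harness used to find the SSH port.
        const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-041029").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // Prints 34202 for the inspect output shown above.
        fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out)))
    }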
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029: exit status 2 (378.061479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-041029 logs -n 25
E1120 22:30:05.211199  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-041029 logs -n 25: (1.426250787s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p newest-cni-135623 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p no-preload-041029 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ newest-cni-135623 image list --format=json                                                                                                                                                                                                    │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ pause   │ -p newest-cni-135623 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ delete  │ -p newest-cni-135623                                                                                                                                                                                                                          │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ delete  │ -p newest-cni-135623                                                                                                                                                                                                                          │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p auto-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-640880                  │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-041029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:29 UTC │
	│ image   │ no-preload-041029 image list --format=json                                                                                                                                                                                                    │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:30 UTC │ 20 Nov 25 22:30 UTC │
	│ pause   │ -p no-preload-041029 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:28:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:28:46.875585 1050333 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:46.875809 1050333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:46.875832 1050333 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:46.875850 1050333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:46.876127 1050333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:46.876522 1050333 out.go:368] Setting JSON to false
	I1120 22:28:46.877443 1050333 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18652,"bootTime":1763659075,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:28:46.877529 1050333 start.go:143] virtualization:  
	I1120 22:28:46.881702 1050333 out.go:179] * [no-preload-041029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:28:46.886132 1050333 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:28:46.886213 1050333 notify.go:221] Checking for updates...
	I1120 22:28:46.899044 1050333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:28:46.902342 1050333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:46.905629 1050333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:28:46.908787 1050333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:28:46.911923 1050333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:28:46.915519 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:46.916186 1050333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:28:46.973963 1050333 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:28:46.974085 1050333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:47.069142 1050333 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-20 22:28:47.058765989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:47.069254 1050333 docker.go:319] overlay module found
	I1120 22:28:47.074853 1050333 out.go:179] * Using the docker driver based on existing profile
	I1120 22:28:47.078073 1050333 start.go:309] selected driver: docker
	I1120 22:28:47.078106 1050333 start.go:930] validating driver "docker" against &{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:47.078206 1050333 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:28:47.078959 1050333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:47.186042 1050333 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-20 22:28:47.176730196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:47.186390 1050333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:28:47.186424 1050333 cni.go:84] Creating CNI manager for ""
	I1120 22:28:47.186478 1050333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:47.186529 1050333 start.go:353] cluster config:
	{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:47.189860 1050333 out.go:179] * Starting "no-preload-041029" primary control-plane node in "no-preload-041029" cluster
	I1120 22:28:47.195030 1050333 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:28:47.198022 1050333 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:28:47.200955 1050333 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:47.201056 1050333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:28:47.201096 1050333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:28:47.201413 1050333 cache.go:107] acquiring lock: {Name:mkfe8a3234fd2567b981ed2e943c252800f37788 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201498 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 22:28:47.201510 1050333 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.622µs
	I1120 22:28:47.201518 1050333 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 22:28:47.201531 1050333 cache.go:107] acquiring lock: {Name:mk5ddbac06bb4c58e0829e32dc3cac2e0f3d3484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201569 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 22:28:47.201579 1050333 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 50.487µs
	I1120 22:28:47.201586 1050333 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 22:28:47.201596 1050333 cache.go:107] acquiring lock: {Name:mk6473ff5661413ee7b260344002f555ac817d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201628 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 22:28:47.201637 1050333 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.437µs
	I1120 22:28:47.201647 1050333 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 22:28:47.201657 1050333 cache.go:107] acquiring lock: {Name:mk452c1826f4ea2a7476e6cd709c98ef1de14eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201687 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 22:28:47.201695 1050333 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 39.025µs
	I1120 22:28:47.201706 1050333 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 22:28:47.201716 1050333 cache.go:107] acquiring lock: {Name:mkc179cc367be18f686b3ff0d25d7c0a4d38107a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201745 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 22:28:47.201755 1050333 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 40.042µs
	I1120 22:28:47.201761 1050333 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 22:28:47.201770 1050333 cache.go:107] acquiring lock: {Name:mk2d31e05763b1401b87a3347e71140539ad5cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201800 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1120 22:28:47.201809 1050333 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.082µs
	I1120 22:28:47.201815 1050333 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 22:28:47.201825 1050333 cache.go:107] acquiring lock: {Name:mk1e9e4e31f0a8424c64380df7184f5c5bff61db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201856 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 22:28:47.201863 1050333 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.869µs
	I1120 22:28:47.201873 1050333 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 22:28:47.201882 1050333 cache.go:107] acquiring lock: {Name:mk7bd038abefa117c730983c9f9ea84fc4100cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201913 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 22:28:47.201923 1050333 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 41.674µs
	I1120 22:28:47.201929 1050333 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 22:28:47.201935 1050333 cache.go:87] Successfully saved all images to host disk.
	I1120 22:28:47.222473 1050333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:28:47.222494 1050333 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:28:47.222507 1050333 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:28:47.222531 1050333 start.go:360] acquireMachinesLock for no-preload-041029: {Name:mk272b44e31f3ea0985bee4020b0ba7b3af4d70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.222603 1050333 start.go:364] duration metric: took 57.675µs to acquireMachinesLock for "no-preload-041029"
	I1120 22:28:47.222624 1050333 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:28:47.222630 1050333 fix.go:54] fixHost starting: 
	I1120 22:28:47.222889 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:47.247452 1050333 fix.go:112] recreateIfNeeded on no-preload-041029: state=Stopped err=<nil>
	W1120 22:28:47.247483 1050333 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:28:44.861026 1049903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:28:44.861268 1049903 start.go:159] libmachine.API.Create for "auto-640880" (driver="docker")
	I1120 22:28:44.861314 1049903 client.go:173] LocalClient.Create starting
	I1120 22:28:44.861383 1049903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:28:44.861422 1049903 main.go:143] libmachine: Decoding PEM data...
	I1120 22:28:44.861439 1049903 main.go:143] libmachine: Parsing certificate...
	I1120 22:28:44.861505 1049903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:28:44.861529 1049903 main.go:143] libmachine: Decoding PEM data...
	I1120 22:28:44.861542 1049903 main.go:143] libmachine: Parsing certificate...
	I1120 22:28:44.861948 1049903 cli_runner.go:164] Run: docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:28:44.877903 1049903 cli_runner.go:211] docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:28:44.877988 1049903 network_create.go:284] running [docker network inspect auto-640880] to gather additional debugging logs...
	I1120 22:28:44.878007 1049903 cli_runner.go:164] Run: docker network inspect auto-640880
	W1120 22:28:44.894593 1049903 cli_runner.go:211] docker network inspect auto-640880 returned with exit code 1
	I1120 22:28:44.894620 1049903 network_create.go:287] error running [docker network inspect auto-640880]: docker network inspect auto-640880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-640880 not found
	I1120 22:28:44.894632 1049903 network_create.go:289] output of [docker network inspect auto-640880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-640880 not found
	
	** /stderr **
	I1120 22:28:44.894744 1049903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:44.911180 1049903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:28:44.911627 1049903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:28:44.911875 1049903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:28:44.912294 1049903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec7f0}
	I1120 22:28:44.912316 1049903 network_create.go:124] attempt to create docker network auto-640880 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 22:28:44.912371 1049903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-640880 auto-640880
	I1120 22:28:44.979995 1049903 network_create.go:108] docker network auto-640880 192.168.76.0/24 created
	I1120 22:28:44.980027 1049903 kic.go:121] calculated static IP "192.168.76.2" for the "auto-640880" container
	I1120 22:28:44.980113 1049903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:28:44.996188 1049903 cli_runner.go:164] Run: docker volume create auto-640880 --label name.minikube.sigs.k8s.io=auto-640880 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:28:45.081736 1049903 oci.go:103] Successfully created a docker volume auto-640880
	I1120 22:28:45.081854 1049903 cli_runner.go:164] Run: docker run --rm --name auto-640880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-640880 --entrypoint /usr/bin/test -v auto-640880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:28:45.683369 1049903 oci.go:107] Successfully prepared a docker volume auto-640880
	I1120 22:28:45.683446 1049903 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:45.683459 1049903 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 22:28:45.683545 1049903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-640880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
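The preload tarball is extracted into the profile's named volume by running tar inside a throwaway kicbase container, so the host never needs lz4 installed. A self-contained Go sketch of that invocation follows; the tarball path, volume name and (truncated) image tag are placeholders, not the exact values from this run.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const (
			preload = "/path/to/preloaded-images-k8s.tar.lz4"        // hypothetical path
			volume  = "example-profile"                              // hypothetical volume
			image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"   // tag shortened, digest omitted
		)
		// Mirror the "docker run --rm --entrypoint /usr/bin/tar" call in the log:
		// mount the tarball read-only, mount the volume, extract with lz4.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("preload extraction failed: %v\n%s", err, out)
		}
		log.Println("preload extracted into volume", volume)
	}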
	I1120 22:28:47.250832 1050333 out.go:252] * Restarting existing docker container for "no-preload-041029" ...
	I1120 22:28:47.250949 1050333 cli_runner.go:164] Run: docker start no-preload-041029
	I1120 22:28:47.597200 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:47.619881 1050333 kic.go:430] container "no-preload-041029" state is running.
	I1120 22:28:47.620266 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:47.651626 1050333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:28:47.651888 1050333 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:47.651949 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:47.678669 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:47.679032 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:47.679043 1050333 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:47.679992 1050333 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46530->127.0.0.1:34202: read: connection reset by peer
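The SSH client above dials 127.0.0.1:34202, the ephemeral host port docker bound to the container's 22/tcp; that mapping is read back with the container-inspect Go template repeated throughout the log, and the first dial can fail with a reset while sshd is still coming up. A small sketch of the port lookup (container name is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor asks docker which host port was published for a container port,
	// using the same template as the log:
	// (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort
	func hostPortFor(container, port string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPortFor("no-preload-041029", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh is published on 127.0.0.1:" + p)
	}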
	I1120 22:28:50.874661 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:28:50.874687 1050333 ubuntu.go:182] provisioning hostname "no-preload-041029"
	I1120 22:28:50.874771 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:50.898032 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:50.898340 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:50.898357 1050333 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-041029 && echo "no-preload-041029" | sudo tee /etc/hostname
	I1120 22:28:51.098472 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:28:51.098719 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:51.159080 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:51.159414 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:51.159432 1050333 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-041029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-041029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-041029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:51.351104 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: 
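The shell fragment the provisioner just ran ensures /etc/hosts maps the machine name: if no line already ends with the hostname, the 127.0.1.1 entry is rewritten, or appended when absent. A hedged pure-Go rendering of the same logic (string manipulation only; the real provisioner runs the grep/sed/tee pipeline over SSH):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname rewrites or appends the 127.0.1.1 entry for name,
	// mirroring the grep/sed/tee snippet in the log above.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already present
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureHostname(in, "no-preload-041029"))
	}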
	I1120 22:28:51.351133 1050333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:51.351168 1050333 ubuntu.go:190] setting up certificates
	I1120 22:28:51.351178 1050333 provision.go:84] configureAuth start
	I1120 22:28:51.351250 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:51.420472 1050333 provision.go:143] copyHostCerts
	I1120 22:28:51.420543 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:51.420564 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:51.420651 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:51.420758 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:51.420770 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:51.420799 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:51.420864 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:51.420874 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:51.420900 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:51.420962 1050333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.no-preload-041029 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-041029]
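The provision step above issues a server certificate signed by the minikube CA whose SANs cover the loopback address, the container IP and the machine names. A minimal Go sketch of that shape using crypto/x509 follows; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from disk, so it is an illustration of the technique rather than minikube's provision code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for the CA material on disk.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "sketchCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		// Server certificate with the SANs listed in the log line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-041029"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-041029"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		fmt.Printf("issued %d-byte server cert with SANs %v\n", len(srvDER), srvTmpl.DNSNames)
	}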
	I1120 22:28:50.462448 1049903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-640880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.778860564s)
	I1120 22:28:50.462476 1049903 kic.go:203] duration metric: took 4.779014232s to extract preloaded images to volume ...
	W1120 22:28:50.462598 1049903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:28:50.462698 1049903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:28:50.556356 1049903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-640880 --name auto-640880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-640880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-640880 --network auto-640880 --ip 192.168.76.2 --volume auto-640880:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:28:50.923079 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Running}}
	I1120 22:28:50.944544 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:50.968532 1049903 cli_runner.go:164] Run: docker exec auto-640880 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:28:51.033619 1049903 oci.go:144] the created container "auto-640880" has a running status.
	I1120 22:28:51.033646 1049903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa...
	I1120 22:28:51.524395 1049903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:28:51.573016 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:51.621977 1049903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:28:51.622000 1049903 kic_runner.go:114] Args: [docker exec --privileged auto-640880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:28:51.678357 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:51.699470 1049903 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:51.699569 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:51.724610 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:51.724950 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:51.724965 1049903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:51.725622 1049903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:28:52.429236 1050333 provision.go:177] copyRemoteCerts
	I1120 22:28:52.429314 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:52.429360 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:52.447110 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:52.550833 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:28:52.599068 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:52.629491 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:28:52.654913 1050333 provision.go:87] duration metric: took 1.303705784s to configureAuth
	I1120 22:28:52.654948 1050333 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:52.655175 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:52.655306 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:52.677138 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:52.677562 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:52.677578 1050333 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:53.115956 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:53.115984 1050333 machine.go:97] duration metric: took 5.464085244s to provisionDockerMachine
	I1120 22:28:53.115995 1050333 start.go:293] postStartSetup for "no-preload-041029" (driver="docker")
	I1120 22:28:53.116006 1050333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:53.116081 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:53.116125 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.143646 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.251536 1050333 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:53.254768 1050333 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:53.254799 1050333 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:53.254811 1050333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:53.254869 1050333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:53.254957 1050333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:53.255094 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:53.263280 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:53.280711 1050333 start.go:296] duration metric: took 164.699249ms for postStartSetup
	I1120 22:28:53.280805 1050333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:53.280857 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.298110 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.395855 1050333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
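The two df/awk pipelines above read how full /var is inside the machine (percentage used, then gigabytes free). A small Go sketch of the first check, run locally for illustration rather than over the ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// varUsage runs the same pipeline as the log: second line of `df -h /var`,
	// fifth column (the Use% figure).
	func varUsage() (string, error) {
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		use, err := varUsage()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		fmt.Println("/var usage:", use)
	}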
	I1120 22:28:53.400956 1050333 fix.go:56] duration metric: took 6.178317856s for fixHost
	I1120 22:28:53.400983 1050333 start.go:83] releasing machines lock for "no-preload-041029", held for 6.178370443s
	I1120 22:28:53.401054 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:53.421021 1050333 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:53.421046 1050333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:53.421084 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.421107 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.443258 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.455289 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.651741 1050333 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:53.658089 1050333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:53.694739 1050333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:53.699554 1050333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:53.699658 1050333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:53.708759 1050333 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:28:53.708845 1050333 start.go:496] detecting cgroup driver to use...
	I1120 22:28:53.708907 1050333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:53.708988 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:53.724295 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:53.737350 1050333 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:53.737463 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:53.753774 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:53.767201 1050333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:53.877453 1050333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:53.991722 1050333 docker.go:234] disabling docker service ...
	I1120 22:28:53.991791 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:54.008192 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:54.023009 1050333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:54.145769 1050333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:54.283262 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:54.298921 1050333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:54.313307 1050333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:54.313400 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.323056 1050333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:54.323125 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.333281 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.344299 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.353853 1050333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:54.362294 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.371935 1050333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.380727 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.389474 1050333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:54.397191 1050333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:54.404547 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:54.513905 1050333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:28:54.708369 1050333 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:54.708481 1050333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:54.712961 1050333 start.go:564] Will wait 60s for crictl version
	I1120 22:28:54.713070 1050333 ssh_runner.go:195] Run: which crictl
	I1120 22:28:54.717130 1050333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:54.762764 1050333 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
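After restarting crio, start.go waits up to 60s for the CRI socket to appear before asking crictl for its version. A minimal sketch of that wait loop, assuming only that the socket path eventually shows up on disk:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the CRI socket exists or the timeout expires,
	// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock" above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}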
	I1120 22:28:54.762942 1050333 ssh_runner.go:195] Run: crio --version
	I1120 22:28:54.814802 1050333 ssh_runner.go:195] Run: crio --version
	I1120 22:28:54.850050 1050333 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:54.853052 1050333 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:54.875844 1050333 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:54.879963 1050333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:54.893033 1050333 kubeadm.go:884] updating cluster {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:54.893151 1050333 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:54.893196 1050333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:54.937492 1050333 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:54.937528 1050333 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:54.937549 1050333 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:54.937662 1050333 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-041029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:54.937766 1050333 ssh_runner.go:195] Run: crio config
	I1120 22:28:55.014066 1050333 cni.go:84] Creating CNI manager for ""
	I1120 22:28:55.014153 1050333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:55.014190 1050333 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:28:55.014256 1050333 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-041029 NodeName:no-preload-041029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:55.014465 1050333 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-041029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:55.014593 1050333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:55.025016 1050333 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:55.025106 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:55.034051 1050333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:28:55.049630 1050333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:55.065414 1050333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 22:28:55.081442 1050333 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:28:55.089685 1050333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:55.100952 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:55.246906 1050333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:55.263154 1050333 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029 for IP: 192.168.85.2
	I1120 22:28:55.263178 1050333 certs.go:195] generating shared ca certs ...
	I1120 22:28:55.263196 1050333 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:55.263342 1050333 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:28:55.263404 1050333 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:28:55.263416 1050333 certs.go:257] generating profile certs ...
	I1120 22:28:55.263541 1050333 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.key
	I1120 22:28:55.263612 1050333 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6
	I1120 22:28:55.263658 1050333 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key
	I1120 22:28:55.263773 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:28:55.263806 1050333 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:28:55.263820 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:28:55.263846 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:28:55.263873 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:28:55.263897 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:28:55.263943 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:55.264578 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:28:55.315139 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:28:55.384309 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:28:55.479609 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:28:55.548932 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:28:55.578866 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:28:55.603567 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:28:55.622604 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:28:55.639360 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:28:55.656783 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:28:55.674169 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:28:55.694865 1050333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:28:55.709310 1050333 ssh_runner.go:195] Run: openssl version
	I1120 22:28:55.716117 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.724498 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:28:55.733344 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.737309 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.737371 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.779270 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:28:55.786548 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.793529 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:28:55.800668 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.805029 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.805101 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.847457 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:28:55.855830 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.862971 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:28:55.870398 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.874783 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.874891 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.917249 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:28:55.924891 1050333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:28:55.929113 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:28:56.014380 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:28:56.100019 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:28:56.183635 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:28:56.259676 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:28:56.392223 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
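Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done in Go with crypto/x509; the path below is a placeholder and the function is a sketch, not minikube's cert code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the question "-checkend 86400" answers in the log.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}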
	I1120 22:28:56.491899 1050333 kubeadm.go:401] StartCluster: {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:56.491983 1050333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:28:56.492054 1050333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:28:56.550231 1050333 cri.go:89] found id: "e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab"
	I1120 22:28:56.550255 1050333 cri.go:89] found id: "0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0"
	I1120 22:28:56.550260 1050333 cri.go:89] found id: "f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863"
	I1120 22:28:56.550264 1050333 cri.go:89] found id: "1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6"
	I1120 22:28:56.550267 1050333 cri.go:89] found id: ""
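cri.go enumerates the existing kube-system containers by running crictl with a namespace label filter, producing the "found id:" lines above. A hedged Go sketch of that listing (invoking crictl directly rather than through the log's `sudo -s eval` wrapper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainers returns the IDs of all kube-system containers,
	// matching the crictl invocation shown in the log.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}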
	I1120 22:28:56.550333 1050333 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:28:56.585008 1050333 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:56Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:56.585095 1050333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:28:56.603572 1050333 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:28:56.603596 1050333 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:28:56.603651 1050333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:28:56.615405 1050333 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:28:56.615896 1050333 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-041029" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:56.616055 1050333 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-041029" cluster setting kubeconfig missing "no-preload-041029" context setting]
	I1120 22:28:56.616448 1050333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.617968 1050333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:28:56.630718 1050333 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:28:56.630748 1050333 kubeadm.go:602] duration metric: took 27.146417ms to restartPrimaryControlPlane
	I1120 22:28:56.630757 1050333 kubeadm.go:403] duration metric: took 138.867188ms to StartCluster
	I1120 22:28:56.630774 1050333 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.630830 1050333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:56.631464 1050333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.631665 1050333 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:56.632134 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:56.632198 1050333 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:56.632266 1050333 addons.go:70] Setting storage-provisioner=true in profile "no-preload-041029"
	I1120 22:28:56.632284 1050333 addons.go:239] Setting addon storage-provisioner=true in "no-preload-041029"
	W1120 22:28:56.632295 1050333 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:28:56.632319 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.632771 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.632948 1050333 addons.go:70] Setting dashboard=true in profile "no-preload-041029"
	I1120 22:28:56.632994 1050333 addons.go:239] Setting addon dashboard=true in "no-preload-041029"
	W1120 22:28:56.633021 1050333 addons.go:248] addon dashboard should already be in state true
	I1120 22:28:56.633063 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.633514 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.635922 1050333 addons.go:70] Setting default-storageclass=true in profile "no-preload-041029"
	I1120 22:28:56.636051 1050333 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-041029"
	I1120 22:28:56.636565 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.639874 1050333 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:56.643083 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:56.675012 1050333 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:28:56.680484 1050333 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:28:56.683297 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:28:56.683321 1050333 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:28:56.683410 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.688065 1050333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:56.690934 1050333 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:56.690956 1050333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:56.691034 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.697844 1050333 addons.go:239] Setting addon default-storageclass=true in "no-preload-041029"
	W1120 22:28:56.697873 1050333 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:28:56.697899 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.698301 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.726748 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:56.735233 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:56.750048 1050333 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:56.750069 1050333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:56.750135 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.782067 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
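The addon phase above stages manifests (dashboard-ns.yaml, storage-provisioner.yaml, storageclass.yaml) into /etc/kubernetes/addons over SSH. The apply step itself is not shown in this excerpt; assuming the usual kubectl flow follows, a sketch of applying the staged manifests could look like the following (paths and the kubeconfig in use are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddons applies each staged manifest with kubectl. This is an
	// illustrative assumption about the step that follows the scp lines above,
	// not a verbatim reproduction of minikube's addon enabler.
	func applyAddons(manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}

	func main() {
		_ = applyAddons(
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/dashboard-ns.yaml",
		)
	}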
	I1120 22:28:54.875072 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-640880
	
	I1120 22:28:54.875105 1049903 ubuntu.go:182] provisioning hostname "auto-640880"
	I1120 22:28:54.875176 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:54.898043 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:54.898342 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:54.898354 1049903 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-640880 && echo "auto-640880" | sudo tee /etc/hostname
	I1120 22:28:55.080850 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-640880
	
	I1120 22:28:55.080947 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:55.104479 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:55.104782 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:55.104799 1049903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-640880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-640880/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-640880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:55.271770 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:28:55.271799 1049903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:55.271824 1049903 ubuntu.go:190] setting up certificates
	I1120 22:28:55.271843 1049903 provision.go:84] configureAuth start
	I1120 22:28:55.271913 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:55.292669 1049903 provision.go:143] copyHostCerts
	I1120 22:28:55.292730 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:55.292739 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:55.292839 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:55.292933 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:55.292938 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:55.292963 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:55.293022 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:55.293026 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:55.293048 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:55.293102 1049903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.auto-640880 san=[127.0.0.1 192.168.76.2 auto-640880 localhost minikube]
	I1120 22:28:56.135450 1049903 provision.go:177] copyRemoteCerts
	I1120 22:28:56.135524 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:56.135584 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.155231 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:56.268691 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:56.301643 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 22:28:56.335802 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:28:56.367801 1049903 provision.go:87] duration metric: took 1.095943238s to configureAuth
	I1120 22:28:56.367825 1049903 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:56.368009 1049903 config.go:182] Loaded profile config "auto-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:56.368111 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.390448 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:56.390754 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:56.390771 1049903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:56.850892 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:56.850921 1049903 machine.go:97] duration metric: took 5.151431577s to provisionDockerMachine
	I1120 22:28:56.850931 1049903 client.go:176] duration metric: took 11.989606002s to LocalClient.Create
	I1120 22:28:56.850944 1049903 start.go:167] duration metric: took 11.989678167s to libmachine.API.Create "auto-640880"
	I1120 22:28:56.850951 1049903 start.go:293] postStartSetup for "auto-640880" (driver="docker")
	I1120 22:28:56.850961 1049903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:56.851048 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:56.851090 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.884017 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.017376 1049903 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:57.026925 1049903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:57.026956 1049903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:57.026968 1049903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:57.027082 1049903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:57.027174 1049903 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:57.027287 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:57.042311 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:57.069158 1049903 start.go:296] duration metric: took 218.191768ms for postStartSetup
	I1120 22:28:57.069546 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:57.099422 1049903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/config.json ...
	I1120 22:28:57.099692 1049903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:57.099740 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.125058 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.236381 1049903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:57.243597 1049903 start.go:128] duration metric: took 12.385866674s to createHost
	I1120 22:28:57.243621 1049903 start.go:83] releasing machines lock for "auto-640880", held for 12.386006458s
	I1120 22:28:57.243692 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:57.268653 1049903 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:57.268713 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.268882 1049903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:57.268951 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.301045 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.312731 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.411645 1049903 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:57.573405 1049903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:57.641652 1049903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:57.652155 1049903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:57.652240 1049903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:57.695406 1049903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:28:57.695432 1049903 start.go:496] detecting cgroup driver to use...
	I1120 22:28:57.695473 1049903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:57.695538 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:57.724795 1049903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:57.744122 1049903 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:57.744200 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:57.773197 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:57.808041 1049903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:58.043900 1049903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:58.265449 1049903 docker.go:234] disabling docker service ...
	I1120 22:28:58.265556 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:58.306569 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:58.339318 1049903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:58.572526 1049903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:58.804459 1049903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:58.836917 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:58.868813 1049903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:58.868933 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.884482 1049903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:58.884600 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.897530 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.908460 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.926720 1049903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:58.937528 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.948386 1049903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.968284 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.988499 1049903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:58.997052 1049903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:59.014305 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:59.243331 1049903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:28:59.496139 1049903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:59.496262 1049903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:59.503527 1049903 start.go:564] Will wait 60s for crictl version
	I1120 22:28:59.503648 1049903 ssh_runner.go:195] Run: which crictl
	I1120 22:28:59.511793 1049903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:59.555814 1049903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:28:59.555970 1049903 ssh_runner.go:195] Run: crio --version
	I1120 22:28:59.608583 1049903 ssh_runner.go:195] Run: crio --version
	I1120 22:28:59.661226 1049903 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:57.012188 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:28:57.012219 1050333 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:28:57.051569 1050333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:57.072533 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:57.099956 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:28:57.099986 1050333 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:28:57.122439 1050333 node_ready.go:35] waiting up to 6m0s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:57.167291 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:28:57.167311 1050333 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:28:57.186309 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:57.189761 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:28:57.189780 1050333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:28:57.203316 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:28:57.203337 1050333 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:28:57.216795 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:28:57.216869 1050333 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:28:57.335602 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:28:57.335624 1050333 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:28:57.404367 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:28:57.404388 1050333 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:28:57.463075 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:57.463096 1050333 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:28:57.484333 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:59.664405 1049903 cli_runner.go:164] Run: docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:59.688694 1049903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:59.692764 1049903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:59.709392 1049903 kubeadm.go:884] updating cluster {Name:auto-640880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:59.709515 1049903 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:59.709574 1049903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:59.777321 1049903 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:59.777347 1049903 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:28:59.777403 1049903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:59.828630 1049903 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:59.828656 1049903 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:59.828665 1049903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:59.828756 1049903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-640880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:59.828856 1049903 ssh_runner.go:195] Run: crio config
	I1120 22:28:59.937215 1049903 cni.go:84] Creating CNI manager for ""
	I1120 22:28:59.937247 1049903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:59.937264 1049903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:28:59.937289 1049903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-640880 NodeName:auto-640880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:59.937433 1049903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-640880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:59.937518 1049903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:59.946748 1049903 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:59.946845 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:59.961540 1049903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1120 22:28:59.976750 1049903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:59.997587 1049903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1120 22:29:00.022343 1049903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:29:00.047600 1049903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:29:00.149327 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:29:00.419288 1049903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:29:00.447971 1049903 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880 for IP: 192.168.76.2
	I1120 22:29:00.447995 1049903 certs.go:195] generating shared ca certs ...
	I1120 22:29:00.448012 1049903 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.448161 1049903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:29:00.448217 1049903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:29:00.448239 1049903 certs.go:257] generating profile certs ...
	I1120 22:29:00.448323 1049903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key
	I1120 22:29:00.448341 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt with IP's: []
	I1120 22:29:00.758657 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt ...
	I1120 22:29:00.758690 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: {Name:mk90d4fb34cbe7c69e3bbf6c05cb072350bd032a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.758878 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key ...
	I1120 22:29:00.758894 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key: {Name:mk53abed259f75db5a291342c90e4e112df02021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.758998 1049903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48
	I1120 22:29:00.759022 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 22:29:00.859616 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 ...
	I1120 22:29:00.859646 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48: {Name:mk642baf5a111a12d0f0d63615b99c5469178f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.859817 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48 ...
	I1120 22:29:00.859832 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48: {Name:mk280b679d983240eb64192783e31425cb0b6544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.859983 1049903 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt
	I1120 22:29:00.860090 1049903 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key
	I1120 22:29:00.860152 1049903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key
	I1120 22:29:00.860172 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt with IP's: []
	I1120 22:29:01.254424 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt ...
	I1120 22:29:01.254456 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt: {Name:mk8d2462da535744bcf7c352150cedc78a8fed08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:01.254668 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key ...
	I1120 22:29:01.254682 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key: {Name:mkdf81ac8fd20690459059ae6a3069d670325518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:01.254890 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:29:01.254934 1049903 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:29:01.254952 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:29:01.254991 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:29:01.255018 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:29:01.255047 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:29:01.255093 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:29:01.255738 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:29:01.307799 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:29:01.355717 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:29:01.394344 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:29:01.428805 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1120 22:29:01.456456 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:29:01.489151 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:29:01.515089 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:29:01.542389 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:29:01.572758 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:29:01.600767 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:29:01.636617 1049903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:29:01.659167 1049903 ssh_runner.go:195] Run: openssl version
	I1120 22:29:01.676600 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.689018 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:29:01.704056 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.708529 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.708650 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.775871 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:29:01.784450 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
	I1120 22:29:01.797962 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.810106 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:29:01.819242 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.824760 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.824841 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.878564 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:29:01.889545 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 22:29:01.905229 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.917318 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:29:01.931804 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.939507 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.939584 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.997692 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:29:02.007263 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
	I1120 22:29:02.017114 1049903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:29:02.023773 1049903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:29:02.023842 1049903 kubeadm.go:401] StartCluster: {Name:auto-640880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:29:02.023919 1049903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:29:02.023989 1049903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:29:02.085413 1049903 cri.go:89] found id: ""
	I1120 22:29:02.085505 1049903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:29:02.100099 1049903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:29:02.113731 1049903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:29:02.113798 1049903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:29:02.126478 1049903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:29:02.126498 1049903 kubeadm.go:158] found existing configuration files:
	
	I1120 22:29:02.126549 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:29:02.134938 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:29:02.135092 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:29:02.147376 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:29:02.160754 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:29:02.160822 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:29:02.172465 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:29:02.187596 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:29:02.187664 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:29:02.212846 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:29:02.235402 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:29:02.235525 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 22:29:02.253624 1049903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:29:02.364408 1049903 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 22:29:02.364467 1049903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:29:02.415753 1049903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:29:02.415840 1049903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:29:02.415877 1049903 kubeadm.go:319] OS: Linux
	I1120 22:29:02.415932 1049903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:29:02.415984 1049903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:29:02.416034 1049903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:29:02.416084 1049903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:29:02.416134 1049903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:29:02.416184 1049903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:29:02.416232 1049903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:29:02.416282 1049903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:29:02.416330 1049903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:29:02.527409 1049903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:29:02.527529 1049903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:29:02.527618 1049903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 22:29:02.551348 1049903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:29:02.556995 1049903 out.go:252]   - Generating certificates and keys ...
	I1120 22:29:02.557092 1049903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:29:02.557159 1049903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:29:03.091226 1049903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:29:03.803480 1049903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:29:03.978323 1050333 node_ready.go:49] node "no-preload-041029" is "Ready"
	I1120 22:29:03.978361 1050333 node_ready.go:38] duration metric: took 6.855862285s for node "no-preload-041029" to be "Ready" ...
	I1120 22:29:03.978376 1050333 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:29:03.978440 1050333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:29:07.068205 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995586676s)
	I1120 22:29:07.068270 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.881942632s)
	I1120 22:29:07.068605 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.584242906s)
	I1120 22:29:07.068860 1050333 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.090396824s)
	I1120 22:29:07.068888 1050333 api_server.go:72] duration metric: took 10.437202164s to wait for apiserver process to appear ...
	I1120 22:29:07.068895 1050333 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:29:07.068911 1050333 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:29:07.071877 1050333 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-041029 addons enable metrics-server
	
	I1120 22:29:07.084783 1050333 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:29:07.086657 1050333 api_server.go:141] control plane version: v1.34.1
	I1120 22:29:07.086689 1050333 api_server.go:131] duration metric: took 17.787297ms to wait for apiserver health ...
	I1120 22:29:07.086698 1050333 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:29:07.095411 1050333 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 22:29:05.143385 1049903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:29:05.476155 1049903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:29:05.600076 1049903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:29:05.600745 1049903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-640880 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:29:07.152078 1049903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:29:07.152593 1049903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-640880 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:29:07.483605 1049903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:29:07.815463 1049903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:29:08.274737 1049903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:29:08.275078 1049903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:29:08.483583 1049903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:29:08.605499 1049903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 22:29:08.774768 1049903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:29:09.135567 1049903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:29:09.281598 1049903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:29:09.282211 1049903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:29:09.290357 1049903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 22:29:09.293945 1049903 out.go:252]   - Booting up control plane ...
	I1120 22:29:09.294057 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:29:09.294138 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:29:09.294208 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:29:09.318381 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:29:09.318629 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 22:29:09.324368 1049903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 22:29:09.324767 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:29:09.324858 1049903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:29:09.471538 1049903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 22:29:09.471735 1049903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 22:29:07.096295 1050333 system_pods.go:59] 8 kube-system pods found
	I1120 22:29:07.096326 1050333 system_pods.go:61] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:29:07.096335 1050333 system_pods.go:61] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:29:07.096341 1050333 system_pods.go:61] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:29:07.096354 1050333 system_pods.go:61] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:29:07.096361 1050333 system_pods.go:61] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:29:07.096367 1050333 system_pods.go:61] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:29:07.096374 1050333 system_pods.go:61] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:29:07.096379 1050333 system_pods.go:61] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:29:07.096384 1050333 system_pods.go:74] duration metric: took 9.681453ms to wait for pod list to return data ...
	I1120 22:29:07.096392 1050333 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:29:07.098549 1050333 addons.go:515] duration metric: took 10.466348376s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 22:29:07.100759 1050333 default_sa.go:45] found service account: "default"
	I1120 22:29:07.100783 1050333 default_sa.go:55] duration metric: took 4.384778ms for default service account to be created ...
	I1120 22:29:07.100797 1050333 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:29:07.105004 1050333 system_pods.go:86] 8 kube-system pods found
	I1120 22:29:07.105112 1050333 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:29:07.105165 1050333 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:29:07.105189 1050333 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:29:07.105218 1050333 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:29:07.105260 1050333 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:29:07.105294 1050333 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:29:07.105340 1050333 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:29:07.105364 1050333 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:29:07.105392 1050333 system_pods.go:126] duration metric: took 4.587965ms to wait for k8s-apps to be running ...
	I1120 22:29:07.105436 1050333 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:29:07.105556 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:29:07.133360 1050333 system_svc.go:56] duration metric: took 27.91368ms WaitForService to wait for kubelet
	I1120 22:29:07.133473 1050333 kubeadm.go:587] duration metric: took 10.501779872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:29:07.133512 1050333 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:29:07.139028 1050333 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:29:07.139134 1050333 node_conditions.go:123] node cpu capacity is 2
	I1120 22:29:07.139164 1050333 node_conditions.go:105] duration metric: took 5.609032ms to run NodePressure ...
	I1120 22:29:07.139210 1050333 start.go:242] waiting for startup goroutines ...
	I1120 22:29:07.139237 1050333 start.go:247] waiting for cluster config update ...
	I1120 22:29:07.139287 1050333 start.go:256] writing updated cluster config ...
	I1120 22:29:07.139773 1050333 ssh_runner.go:195] Run: rm -f paused
	I1120 22:29:07.149742 1050333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:29:07.155456 1050333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:29:09.185073 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:11.662917 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:10.470460 1049903 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00166109s
	I1120 22:29:10.470579 1049903 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 22:29:10.470667 1049903 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 22:29:10.470763 1049903 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 22:29:10.470847 1049903 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1120 22:29:13.667517 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:16.164074 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:15.828858 1049903 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.358595078s
	I1120 22:29:19.040497 1049903 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.570637319s
	I1120 22:29:20.972014 1049903 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.501962676s
	I1120 22:29:20.993298 1049903 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:29:21.018729 1049903 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:29:21.040956 1049903 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:29:21.041450 1049903 kubeadm.go:319] [mark-control-plane] Marking the node auto-640880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:29:21.060481 1049903 kubeadm.go:319] [bootstrap-token] Using token: rwnehs.sqap5qw5j7cco1yz
	W1120 22:29:18.661798 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:20.662689 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:21.063593 1049903 out.go:252]   - Configuring RBAC rules ...
	I1120 22:29:21.063715 1049903 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:29:21.069350 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:29:21.085918 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:29:21.093124 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:29:21.098456 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:29:21.103552 1049903 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:29:21.382530 1049903 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:29:21.853611 1049903 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:29:22.401100 1049903 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:29:22.403510 1049903 kubeadm.go:319] 
	I1120 22:29:22.403599 1049903 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:29:22.403607 1049903 kubeadm.go:319] 
	I1120 22:29:22.403685 1049903 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:29:22.403691 1049903 kubeadm.go:319] 
	I1120 22:29:22.403716 1049903 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:29:22.406693 1049903 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:29:22.406755 1049903 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:29:22.406761 1049903 kubeadm.go:319] 
	I1120 22:29:22.406815 1049903 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:29:22.406819 1049903 kubeadm.go:319] 
	I1120 22:29:22.406867 1049903 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:29:22.406872 1049903 kubeadm.go:319] 
	I1120 22:29:22.406923 1049903 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:29:22.407015 1049903 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:29:22.407091 1049903 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:29:22.407096 1049903 kubeadm.go:319] 
	I1120 22:29:22.407461 1049903 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:29:22.407548 1049903 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:29:22.407554 1049903 kubeadm.go:319] 
	I1120 22:29:22.407889 1049903 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rwnehs.sqap5qw5j7cco1yz \
	I1120 22:29:22.407999 1049903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:29:22.408242 1049903 kubeadm.go:319] 	--control-plane 
	I1120 22:29:22.408264 1049903 kubeadm.go:319] 
	I1120 22:29:22.408567 1049903 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:29:22.408578 1049903 kubeadm.go:319] 
	I1120 22:29:22.408903 1049903 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rwnehs.sqap5qw5j7cco1yz \
	I1120 22:29:22.409196 1049903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:29:22.432719 1049903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:29:22.432956 1049903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:29:22.433066 1049903 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 22:29:22.433081 1049903 cni.go:84] Creating CNI manager for ""
	I1120 22:29:22.433088 1049903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:29:22.437136 1049903 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:29:22.440483 1049903 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:29:22.462313 1049903 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:29:22.462336 1049903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:29:22.531123 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:29:23.761761 1049903 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.230554549s)
	I1120 22:29:23.761799 1049903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:29:23.761908 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:23.762001 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-640880 minikube.k8s.io/updated_at=2025_11_20T22_29_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=auto-640880 minikube.k8s.io/primary=true
	I1120 22:29:24.050201 1049903 ops.go:34] apiserver oom_adj: -16
	I1120 22:29:24.050301 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:24.550804 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 22:29:22.667175 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:25.161524 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:25.050454 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:25.550448 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:26.050919 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:26.550922 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.051017 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.551270 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.665671 1049903 kubeadm.go:1114] duration metric: took 3.903805301s to wait for elevateKubeSystemPrivileges
	I1120 22:29:27.665698 1049903 kubeadm.go:403] duration metric: took 25.641861801s to StartCluster
	I1120 22:29:27.665715 1049903 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:27.665785 1049903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:29:27.666768 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:27.666992 1049903 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:29:27.667131 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:29:27.667350 1049903 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:29:27.667417 1049903 config.go:182] Loaded profile config "auto-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:29:27.667435 1049903 addons.go:70] Setting storage-provisioner=true in profile "auto-640880"
	I1120 22:29:27.667459 1049903 addons.go:70] Setting default-storageclass=true in profile "auto-640880"
	I1120 22:29:27.667463 1049903 addons.go:239] Setting addon storage-provisioner=true in "auto-640880"
	I1120 22:29:27.667470 1049903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-640880"
	I1120 22:29:27.667489 1049903 host.go:66] Checking if "auto-640880" exists ...
	I1120 22:29:27.667774 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.667998 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.671487 1049903 out.go:179] * Verifying Kubernetes components...
	I1120 22:29:27.674628 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:29:27.726045 1049903 addons.go:239] Setting addon default-storageclass=true in "auto-640880"
	I1120 22:29:27.726084 1049903 host.go:66] Checking if "auto-640880" exists ...
	I1120 22:29:27.726493 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.748403 1049903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:29:27.751588 1049903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:29:27.751614 1049903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:29:27.751682 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:29:27.761149 1049903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:29:27.761173 1049903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:29:27.761235 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:29:27.787655 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:29:27.803285 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:29:28.173260 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:29:28.173408 1049903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:29:28.245635 1049903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:29:28.260663 1049903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:29:28.795398 1049903 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 22:29:28.796667 1049903 node_ready.go:35] waiting up to 15m0s for node "auto-640880" to be "Ready" ...
	I1120 22:29:29.123228 1049903 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 22:29:29.126018 1049903 addons.go:515] duration metric: took 1.458658976s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 22:29:29.301033 1049903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-640880" context rescaled to 1 replicas
	W1120 22:29:27.162277 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:29.661151 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:31.661406 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:30.799811 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:33.299944 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:34.161702 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:36.660881 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:35.300521 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:37.799781 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:38.661289 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:41.160991 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:39.800161 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:42.300257 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:43.161366 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:45.163365 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:47.661660 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:48.161778 1050333 pod_ready.go:94] pod "coredns-66bc5c9577-6dbgj" is "Ready"
	I1120 22:29:48.161812 1050333 pod_ready.go:86] duration metric: took 41.006271316s for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.164719 1050333 pod_ready.go:83] waiting for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.169474 1050333 pod_ready.go:94] pod "etcd-no-preload-041029" is "Ready"
	I1120 22:29:48.169551 1050333 pod_ready.go:86] duration metric: took 4.79957ms for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.171885 1050333 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.176511 1050333 pod_ready.go:94] pod "kube-apiserver-no-preload-041029" is "Ready"
	I1120 22:29:48.176538 1050333 pod_ready.go:86] duration metric: took 4.623896ms for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.179295 1050333 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.359242 1050333 pod_ready.go:94] pod "kube-controller-manager-no-preload-041029" is "Ready"
	I1120 22:29:48.359271 1050333 pod_ready.go:86] duration metric: took 179.940486ms for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.559334 1050333 pod_ready.go:83] waiting for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.960142 1050333 pod_ready.go:94] pod "kube-proxy-n78zb" is "Ready"
	I1120 22:29:48.960173 1050333 pod_ready.go:86] duration metric: took 400.801924ms for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.159486 1050333 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.559637 1050333 pod_ready.go:94] pod "kube-scheduler-no-preload-041029" is "Ready"
	I1120 22:29:49.559665 1050333 pod_ready.go:86] duration metric: took 400.150953ms for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.559678 1050333 pod_ready.go:40] duration metric: took 42.409820283s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:29:49.635049 1050333 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:29:49.638160 1050333 out.go:179] * Done! kubectl is now configured to use "no-preload-041029" cluster and "default" namespace by default
	W1120 22:29:44.800627 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:47.299327 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:49.300195 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:51.300393 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:53.799360 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:56.307594 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:58.800406 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 20 22:29:32 no-preload-041029 crio[658]: time="2025-11-20T22:29:32.820415365Z" level=info msg="Removed container 55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz/dashboard-metrics-scraper" id=81c21f73-1fc3-4b1e-871f-05d8fc66b187 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:29:36 no-preload-041029 conmon[1156]: conmon a6f77ff04e1d67a44bd5 <ninfo>: container 1178 exited with status 1
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.811908099Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fe8307dd-4426-4f73-aef1-b7b8af17ea4b name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.813233261Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7072cd69-1f1f-44bd-8cd6-a8077dcbb993 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.814502897Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7e17d36f-0137-4878-8ae4-8241aed161cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.814724513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821433555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821759934Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b6533367f3be66a5aa81529eeebca6b162f66898fda1b5a9ec741152a5602d22/merged/etc/passwd: no such file or directory"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821865905Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b6533367f3be66a5aa81529eeebca6b162f66898fda1b5a9ec741152a5602d22/merged/etc/group: no such file or directory"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.822287358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.844151122Z" level=info msg="Created container 41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf: kube-system/storage-provisioner/storage-provisioner" id=7e17d36f-0137-4878-8ae4-8241aed161cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.845219985Z" level=info msg="Starting container: 41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf" id=dfeb82c2-c69a-44eb-a1af-15492b54d217 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.846930736Z" level=info msg="Started container" PID=1654 containerID=41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf description=kube-system/storage-provisioner/storage-provisioner id=dfeb82c2-c69a-44eb-a1af-15492b54d217 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3750cec50ddd4ceca860591336bab161957c7d5145281763c08ed2394540bf71
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.378629641Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384125696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384162997Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384185495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387700921Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387740019Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387764553Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390866221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390901906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390927063Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.394116716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.394154903Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	41ba82d6da898       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           28 seconds ago       Running             storage-provisioner         2                   3750cec50ddd4       storage-provisioner                          kube-system
	203bde87ce2b0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   56a820dde62d1       dashboard-metrics-scraper-6ffb444bf9-gtbnz   kubernetes-dashboard
	d7207e0f6514d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   8c3009aa039e8       kubernetes-dashboard-855c9754f9-5fl85        kubernetes-dashboard
	440ee2ef9222e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   6c4a89b0ad3bd       busybox                                      default
	47eef4f0b9636       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   e384c68fadaf6       coredns-66bc5c9577-6dbgj                     kube-system
	a6f77ff04e1d6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   3750cec50ddd4       storage-provisioner                          kube-system
	e3ff002bcd2e2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   47c7f265d737f       kube-proxy-n78zb                             kube-system
	da42598cf8490       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   85963bf79f54d       kindnet-2fs8p                                kube-system
	e42bdea342f42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b5451396c4751       etcd-no-preload-041029                       kube-system
	0962480e895b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f40364b846a24       kube-controller-manager-no-preload-041029    kube-system
	f023b4b884cd5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5eb8d2a64ea46       kube-scheduler-no-preload-041029             kube-system
	1ed9b7cf8d081       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4afb9f6835c85       kube-apiserver-no-preload-041029             kube-system
	
	
	==> coredns [47eef4f0b9636eb9f49ce7cfceedd7b832747ca4656d77970e8755154fc7ac35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44164 - 12134 "HINFO IN 6698151684193989111.614327327706340683. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.045562792s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-041029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-041029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-041029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_27_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-041029
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:29:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-041029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8a9cfc0-4549-4e9b-8f8a-328559b1944e
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-6dbgj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-no-preload-041029                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-2fs8p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-041029              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-no-preload-041029     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-n78zb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-041029              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gtbnz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5fl85         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s                   node-controller  Node no-preload-041029 event: Registered Node no-preload-041029 in Controller
	  Normal   NodeReady                107s                   kubelet          Node no-preload-041029 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-041029 event: Registered Node no-preload-041029 in Controller
	
	
	==> dmesg <==
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	[Nov20 22:28] overlayfs: idmapped layers are currently not supported
	[ +28.528461] overlayfs: idmapped layers are currently not supported
	[Nov20 22:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab] <==
	{"level":"warn","ts":"2025-11-20T22:29:00.859791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:00.893724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:00.954952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.009750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.062763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.095896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.136000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.200887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.264788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.297889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.339423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.395375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.450140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.514908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.556704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.560591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.588060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.627967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.703081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.720593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.785594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.858875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.936213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:02.059485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:02.229292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:30:05 up  5:12,  0 user,  load average: 4.12, 4.14, 3.16
	Linux no-preload-041029 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da42598cf8490287fd97dafd07a73f5eaa9f8fa0e2bcbe2f23c4598aaec33417] <==
	I1120 22:29:06.161457       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:29:06.203404       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:29:06.203554       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:29:06.203566       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:29:06.203581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:29:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:29:06.404021       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:29:06.404053       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:29:06.404061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:29:06.404160       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:29:36.405858       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:29:36.405862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:29:36.406006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:29:36.406056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1120 22:29:37.604203       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:29:37.604311       1 metrics.go:72] Registering metrics
	I1120 22:29:37.605183       1 controller.go:711] "Syncing nftables rules"
	I1120 22:29:46.378324       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:29:46.378363       1 main.go:301] handling current node
	I1120 22:29:56.379782       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:29:56.379816       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6] <==
	I1120 22:29:04.426577       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:29:04.426825       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:29:04.426894       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:29:04.426925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:29:04.467242       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 22:29:04.467278       1 policy_source.go:240] refreshing policies
	I1120 22:29:04.468114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:29:04.468153       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:29:04.468176       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:29:04.468183       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:29:04.469702       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:29:04.513841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:29:04.537296       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:29:04.637911       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:29:05.544120       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:29:05.775914       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:29:06.219934       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:29:06.393176       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:29:06.561216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:29:06.908974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.191.26"}
	I1120 22:29:06.933457       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.217.170"}
	W1120 22:29:06.956434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 22:29:06.957855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:29:06.964221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:29:09.319537       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0] <==
	I1120 22:29:09.225103       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:29:09.235419       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:29:09.244724       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:29:09.246005       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:29:09.249217       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 22:29:09.249929       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 22:29:09.250425       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-041029"
	I1120 22:29:09.250501       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 22:29:09.249234       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:29:09.249645       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:29:09.249667       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:29:09.255061       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:29:09.256094       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:29:09.265452       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:29:09.268967       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:29:09.280715       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:29:09.284117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:29:09.293141       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:29:09.293244       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:29:09.293295       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:29:09.293410       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:29:09.294785       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:29:09.295171       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:29:09.295248       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:29:09.295290       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [e3ff002bcd2e24647b6415e521297e2309e2f39cdf9a3f07226779379f304671] <==
	I1120 22:29:06.621001       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:29:06.950857       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:29:07.076910       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:29:07.077013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:29:07.077125       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:29:07.192198       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:29:07.192359       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:29:07.200516       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:29:07.200960       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:29:07.201163       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:29:07.202379       1 config.go:200] "Starting service config controller"
	I1120 22:29:07.202433       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:29:07.202478       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:29:07.202505       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:29:07.202543       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:29:07.202570       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:29:07.203347       1 config.go:309] "Starting node config controller"
	I1120 22:29:07.206294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:29:07.206353       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:29:07.302913       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:29:07.303067       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:29:07.303096       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863] <==
	I1120 22:29:00.948578       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:29:05.022617       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:29:05.022651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:29:05.053296       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:29:05.053371       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:29:05.053391       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:29:05.053416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:29:05.085760       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.085787       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.085827       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:29:05.085833       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:29:05.157848       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:29:05.186125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.186856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: I1120 22:29:10.001055     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xvl\" (UniqueName: \"kubernetes.io/projected/df232e57-08f8-4065-abe1-33961949ca0f-kube-api-access-22xvl\") pod \"kubernetes-dashboard-855c9754f9-5fl85\" (UID: \"df232e57-08f8-4065-abe1-33961949ca0f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fl85"
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: I1120 22:29:10.001118     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/13af84c9-f7c8-43fb-bff4-db99817b7d82-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gtbnz\" (UID: \"13af84c9-f7c8-43fb-bff4-db99817b7d82\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz"
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: W1120 22:29:10.232321     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de WatchSource:0}: Error finding container 56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de: Status 404 returned error can't find the container with id 56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: W1120 22:29:10.263822     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf WatchSource:0}: Error finding container 8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf: Status 404 returned error can't find the container with id 8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf
	Nov 20 22:29:17 no-preload-041029 kubelet[779]: I1120 22:29:17.738481     779 scope.go:117] "RemoveContainer" containerID="3d94c28c91f1e3c18d9b0fed99b46e64f7c5c7ceb52b979c2d69f870a4afadab"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: I1120 22:29:18.744777     779 scope.go:117] "RemoveContainer" containerID="3d94c28c91f1e3c18d9b0fed99b46e64f7c5c7ceb52b979c2d69f870a4afadab"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: I1120 22:29:18.745047     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: E1120 22:29:18.745187     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:19 no-preload-041029 kubelet[779]: I1120 22:29:19.752830     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:19 no-preload-041029 kubelet[779]: E1120 22:29:19.752976     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:20 no-preload-041029 kubelet[779]: I1120 22:29:20.754648     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:20 no-preload-041029 kubelet[779]: E1120 22:29:20.754813     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.555722     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.799578     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.799927     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: E1120 22:29:32.800077     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.822835     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fl85" podStartSLOduration=9.914021582 podStartE2EDuration="23.822805172s" podCreationTimestamp="2025-11-20 22:29:09 +0000 UTC" firstStartedPulling="2025-11-20 22:29:10.267542696 +0000 UTC m=+15.003058814" lastFinishedPulling="2025-11-20 22:29:24.176326278 +0000 UTC m=+28.911842404" observedRunningTime="2025-11-20 22:29:24.781714535 +0000 UTC m=+29.517230653" watchObservedRunningTime="2025-11-20 22:29:32.822805172 +0000 UTC m=+37.558321289"
	Nov 20 22:29:36 no-preload-041029 kubelet[779]: I1120 22:29:36.811278     779 scope.go:117] "RemoveContainer" containerID="a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090"
	Nov 20 22:29:40 no-preload-041029 kubelet[779]: I1120 22:29:40.191365     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:40 no-preload-041029 kubelet[779]: E1120 22:29:40.191601     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:52 no-preload-041029 kubelet[779]: I1120 22:29:52.555706     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:52 no-preload-041029 kubelet[779]: E1120 22:29:52.555898     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:30:02 no-preload-041029 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:30:02 no-preload-041029 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:30:02 no-preload-041029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d7207e0f6514d7dd0cc35630dc0c8be98fda4a396f91d7842768b91e9cf4adf1] <==
	2025/11/20 22:29:24 Using namespace: kubernetes-dashboard
	2025/11/20 22:29:24 Using in-cluster config to connect to apiserver
	2025/11/20 22:29:24 Using secret token for csrf signing
	2025/11/20 22:29:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:29:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:29:24 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:29:24 Generating JWE encryption key
	2025/11/20 22:29:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:29:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:29:24 Initializing JWE encryption key from synchronized object
	2025/11/20 22:29:24 Creating in-cluster Sidecar client
	2025/11/20 22:29:24 Serving insecurely on HTTP port: 9090
	2025/11/20 22:29:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:29:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:29:24 Starting overwatch
	
	
	==> storage-provisioner [41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf] <==
	I1120 22:29:36.872512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 22:29:36.872572       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 22:29:36.875887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:40.330770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:44.590918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:48.189426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:51.244041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.266715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.271641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:29:54.272038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:29:54.272278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8!
	I1120 22:29:54.273320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"415b729b-7223-449b-a0a8-421bccd3a052", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8 became leader
	W1120 22:29:54.279208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.284310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:29:54.372940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8!
	W1120 22:29:56.287829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:56.292469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:58.296364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:58.303602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:00.309443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:00.327133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:02.330044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:02.334766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:04.345429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:04.355306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090] <==
	I1120 22:29:06.605041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:29:36.607246       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041029 -n no-preload-041029: exit status 2 (397.852938ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-041029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-041029
helpers_test.go:243: (dbg) docker inspect no-preload-041029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	        "Created": "2025-11-20T22:27:06.220478605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1050459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T22:28:47.283890474Z",
	            "FinishedAt": "2025-11-20T22:28:45.986274129Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hostname",
	        "HostsPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/hosts",
	        "LogPath": "/var/lib/docker/containers/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71-json.log",
	        "Name": "/no-preload-041029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-041029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-041029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71",
	                "LowerDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2-init/diff:/var/lib/docker/overlay2/a4c9aa4ed92f07e1f9ef5fad5b1b05318ab2a97b3c4901904f0ee85afe8c96a2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/347a8e7c579702d7f062fae7b11d653ced871676130268852dcdc03b14302db2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-041029",
	                "Source": "/var/lib/docker/volumes/no-preload-041029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-041029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-041029",
	                "name.minikube.sigs.k8s.io": "no-preload-041029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18588ba00177f5556f1d5ced3d847ab4a70cf86f42046bee341cb697a4e056a0",
	            "SandboxKey": "/var/run/docker/netns/18588ba00177",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-041029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:ff:7a:0d:3f:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d249c184d92c757ccd210aec69d5acdf56f64a6ec2365db3e9108375c30dd5a",
	                    "EndpointID": "5f79e2d32b684030348019203eb6174025ca751651cb28f8fa499b42c2d5f37e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-041029",
	                        "8049b6a31f79"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029: exit status 2 (370.353187ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-041029 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-041029 logs -n 25: (1.335534832s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-559701 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:26 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p default-k8s-diff-port-559701                                                                                                                                                                                                               │ default-k8s-diff-port-559701 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p disable-driver-mounts-305138                                                                                                                                                                                                               │ disable-driver-mounts-305138 │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ embed-certs-270206 image list --format=json                                                                                                                                                                                                   │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ pause   │ -p embed-certs-270206 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │                     │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ delete  │ -p embed-certs-270206                                                                                                                                                                                                                         │ embed-certs-270206           │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:27 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:27 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-135623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p newest-cni-135623 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-041029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ stop    │ -p no-preload-041029 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ image   │ newest-cni-135623 image list --format=json                                                                                                                                                                                                    │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ pause   │ -p newest-cni-135623 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ delete  │ -p newest-cni-135623                                                                                                                                                                                                                          │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ delete  │ -p newest-cni-135623                                                                                                                                                                                                                          │ newest-cni-135623            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p auto-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-640880                  │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-041029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:28 UTC │
	│ start   │ -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:28 UTC │ 20 Nov 25 22:29 UTC │
	│ image   │ no-preload-041029 image list --format=json                                                                                                                                                                                                    │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:30 UTC │ 20 Nov 25 22:30 UTC │
	│ pause   │ -p no-preload-041029 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-041029            │ jenkins │ v1.37.0 │ 20 Nov 25 22:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 22:28:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 22:28:46.875585 1050333 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:28:46.875809 1050333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:46.875832 1050333 out.go:374] Setting ErrFile to fd 2...
	I1120 22:28:46.875850 1050333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:28:46.876127 1050333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:28:46.876522 1050333 out.go:368] Setting JSON to false
	I1120 22:28:46.877443 1050333 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18652,"bootTime":1763659075,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:28:46.877529 1050333 start.go:143] virtualization:  
	I1120 22:28:46.881702 1050333 out.go:179] * [no-preload-041029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:28:46.886132 1050333 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:28:46.886213 1050333 notify.go:221] Checking for updates...
	I1120 22:28:46.899044 1050333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:28:46.902342 1050333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:46.905629 1050333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:28:46.908787 1050333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:28:46.911923 1050333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:28:46.915519 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:46.916186 1050333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:28:46.973963 1050333 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:28:46.974085 1050333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:47.069142 1050333 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-20 22:28:47.058765989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:47.069254 1050333 docker.go:319] overlay module found
	I1120 22:28:47.074853 1050333 out.go:179] * Using the docker driver based on existing profile
	I1120 22:28:47.078073 1050333 start.go:309] selected driver: docker
	I1120 22:28:47.078106 1050333 start.go:930] validating driver "docker" against &{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:47.078206 1050333 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:28:47.078959 1050333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:28:47.186042 1050333 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-20 22:28:47.176730196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:28:47.186390 1050333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:28:47.186424 1050333 cni.go:84] Creating CNI manager for ""
	I1120 22:28:47.186478 1050333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:47.186529 1050333 start.go:353] cluster config:
	{Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:47.189860 1050333 out.go:179] * Starting "no-preload-041029" primary control-plane node in "no-preload-041029" cluster
	I1120 22:28:47.195030 1050333 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 22:28:47.198022 1050333 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 22:28:47.200955 1050333 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:47.201056 1050333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 22:28:47.201096 1050333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:28:47.201413 1050333 cache.go:107] acquiring lock: {Name:mkfe8a3234fd2567b981ed2e943c252800f37788 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201498 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 22:28:47.201510 1050333 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.622µs
	I1120 22:28:47.201518 1050333 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 22:28:47.201531 1050333 cache.go:107] acquiring lock: {Name:mk5ddbac06bb4c58e0829e32dc3cac2e0f3d3484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201569 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 22:28:47.201579 1050333 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 50.487µs
	I1120 22:28:47.201586 1050333 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 22:28:47.201596 1050333 cache.go:107] acquiring lock: {Name:mk6473ff5661413ee7b260344002f555ac817d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201628 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 22:28:47.201637 1050333 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.437µs
	I1120 22:28:47.201647 1050333 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 22:28:47.201657 1050333 cache.go:107] acquiring lock: {Name:mk452c1826f4ea2a7476e6cd709c98ef1de14eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201687 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 22:28:47.201695 1050333 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 39.025µs
	I1120 22:28:47.201706 1050333 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 22:28:47.201716 1050333 cache.go:107] acquiring lock: {Name:mkc179cc367be18f686b3ff0d25d7c0a4d38107a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201745 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 22:28:47.201755 1050333 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 40.042µs
	I1120 22:28:47.201761 1050333 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 22:28:47.201770 1050333 cache.go:107] acquiring lock: {Name:mk2d31e05763b1401b87a3347e71140539ad5cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201800 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1120 22:28:47.201809 1050333 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.082µs
	I1120 22:28:47.201815 1050333 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 22:28:47.201825 1050333 cache.go:107] acquiring lock: {Name:mk1e9e4e31f0a8424c64380df7184f5c5bff61db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201856 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 22:28:47.201863 1050333 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.869µs
	I1120 22:28:47.201873 1050333 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 22:28:47.201882 1050333 cache.go:107] acquiring lock: {Name:mk7bd038abefa117c730983c9f9ea84fc4100cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.201913 1050333 cache.go:115] /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 22:28:47.201923 1050333 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 41.674µs
	I1120 22:28:47.201929 1050333 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 22:28:47.201935 1050333 cache.go:87] Successfully saved all images to host disk.
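	(The cache lines above each acquire a per-image lock, test whether the image tarball already exists under .minikube/cache/images/arm64/..., and skip the save when it does. Below is a minimal Go sketch of that existence check; the cacheDir and image list are illustrative stand-ins, not minikube's actual code.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps an image ref such as "registry.k8s.io/pause:3.10.1" to a
// tarball path like <cacheDir>/registry.k8s.io/pause_3.10.1, mirroring the
// paths visible in the log above.
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/tmp/minikube-cache/images/arm64" // illustrative location
	images := []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range images {
		p := cachePath(cacheDir, img)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache image %q exists at %s, skipping save\n", img, p)
			continue
		}
		fmt.Printf("cache image %q missing, would download and save to %s\n", img, p)
	}
}
```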
	I1120 22:28:47.222473 1050333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 22:28:47.222494 1050333 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 22:28:47.222507 1050333 cache.go:243] Successfully downloaded all kic artifacts
	I1120 22:28:47.222531 1050333 start.go:360] acquireMachinesLock for no-preload-041029: {Name:mk272b44e31f3ea0985bee4020b0ba7b3af4d70d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 22:28:47.222603 1050333 start.go:364] duration metric: took 57.675µs to acquireMachinesLock for "no-preload-041029"
	I1120 22:28:47.222624 1050333 start.go:96] Skipping create...Using existing machine configuration
	I1120 22:28:47.222630 1050333 fix.go:54] fixHost starting: 
	I1120 22:28:47.222889 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:47.247452 1050333 fix.go:112] recreateIfNeeded on no-preload-041029: state=Stopped err=<nil>
	W1120 22:28:47.247483 1050333 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 22:28:44.861026 1049903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 22:28:44.861268 1049903 start.go:159] libmachine.API.Create for "auto-640880" (driver="docker")
	I1120 22:28:44.861314 1049903 client.go:173] LocalClient.Create starting
	I1120 22:28:44.861383 1049903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem
	I1120 22:28:44.861422 1049903 main.go:143] libmachine: Decoding PEM data...
	I1120 22:28:44.861439 1049903 main.go:143] libmachine: Parsing certificate...
	I1120 22:28:44.861505 1049903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem
	I1120 22:28:44.861529 1049903 main.go:143] libmachine: Decoding PEM data...
	I1120 22:28:44.861542 1049903 main.go:143] libmachine: Parsing certificate...
	I1120 22:28:44.861948 1049903 cli_runner.go:164] Run: docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 22:28:44.877903 1049903 cli_runner.go:211] docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 22:28:44.877988 1049903 network_create.go:284] running [docker network inspect auto-640880] to gather additional debugging logs...
	I1120 22:28:44.878007 1049903 cli_runner.go:164] Run: docker network inspect auto-640880
	W1120 22:28:44.894593 1049903 cli_runner.go:211] docker network inspect auto-640880 returned with exit code 1
	I1120 22:28:44.894620 1049903 network_create.go:287] error running [docker network inspect auto-640880]: docker network inspect auto-640880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-640880 not found
	I1120 22:28:44.894632 1049903 network_create.go:289] output of [docker network inspect auto-640880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-640880 not found
	
	** /stderr **
	I1120 22:28:44.894744 1049903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:44.911180 1049903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
	I1120 22:28:44.911627 1049903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6d47b47b5eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:61:6b:56:c9:db} reservation:<nil>}
	I1120 22:28:44.911875 1049903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8999df1e8509 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:04:87:b7:55:e1} reservation:<nil>}
	I1120 22:28:44.912294 1049903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec7f0}
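	(The network.go lines above walk candidate private /24 subnets, skip the ones already claimed by existing bridges, and take the first free one. A rough Go sketch of that scan follows; the taken set is hard-coded from the log rather than read from `docker network inspect`, and the step of 9 between third octets is inferred from the 49/58/67/76 progression shown above.)

```go
package main

import "fmt"

func main() {
	// Subnets already claimed by existing minikube networks (from the log above).
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	// Candidate third octets stepped through in the log: 49, 58, 67, 76, ...
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
```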
	I1120 22:28:44.912316 1049903 network_create.go:124] attempt to create docker network auto-640880 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 22:28:44.912371 1049903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-640880 auto-640880
	I1120 22:28:44.979995 1049903 network_create.go:108] docker network auto-640880 192.168.76.0/24 created
	I1120 22:28:44.980027 1049903 kic.go:121] calculated static IP "192.168.76.2" for the "auto-640880" container
	I1120 22:28:44.980113 1049903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 22:28:44.996188 1049903 cli_runner.go:164] Run: docker volume create auto-640880 --label name.minikube.sigs.k8s.io=auto-640880 --label created_by.minikube.sigs.k8s.io=true
	I1120 22:28:45.081736 1049903 oci.go:103] Successfully created a docker volume auto-640880
	I1120 22:28:45.081854 1049903 cli_runner.go:164] Run: docker run --rm --name auto-640880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-640880 --entrypoint /usr/bin/test -v auto-640880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 22:28:45.683369 1049903 oci.go:107] Successfully prepared a docker volume auto-640880
	I1120 22:28:45.683446 1049903 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:45.683459 1049903 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 22:28:45.683545 1049903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-640880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 22:28:47.250832 1050333 out.go:252] * Restarting existing docker container for "no-preload-041029" ...
	I1120 22:28:47.250949 1050333 cli_runner.go:164] Run: docker start no-preload-041029
	I1120 22:28:47.597200 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:47.619881 1050333 kic.go:430] container "no-preload-041029" state is running.
	I1120 22:28:47.620266 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:47.651626 1050333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/config.json ...
	I1120 22:28:47.651888 1050333 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:47.651949 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:47.678669 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:47.679032 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:47.679043 1050333 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:47.679992 1050333 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46530->127.0.0.1:34202: read: connection reset by peer
	I1120 22:28:50.874661 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:28:50.874687 1050333 ubuntu.go:182] provisioning hostname "no-preload-041029"
	I1120 22:28:50.874771 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:50.898032 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:50.898340 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:50.898357 1050333 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-041029 && echo "no-preload-041029" | sudo tee /etc/hostname
	I1120 22:28:51.098472 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-041029
	
	I1120 22:28:51.098719 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:51.159080 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:51.159414 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:51.159432 1050333 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-041029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-041029/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-041029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:51.351104 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:28:51.351133 1050333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:51.351168 1050333 ubuntu.go:190] setting up certificates
	I1120 22:28:51.351178 1050333 provision.go:84] configureAuth start
	I1120 22:28:51.351250 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:51.420472 1050333 provision.go:143] copyHostCerts
	I1120 22:28:51.420543 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:51.420564 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:51.420651 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:51.420758 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:51.420770 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:51.420799 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:51.420864 1050333 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:51.420874 1050333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:51.420900 1050333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:51.420962 1050333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.no-preload-041029 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-041029]
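	(provision.go:117 issues a per-machine server certificate whose SANs cover the loopback address, the container IP and the host names listed above. Below is a compact Go sketch of issuing such a certificate; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-041029"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-041029"},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```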
	I1120 22:28:50.462448 1049903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-640880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.778860564s)
	I1120 22:28:50.462476 1049903 kic.go:203] duration metric: took 4.779014232s to extract preloaded images to volume ...
	W1120 22:28:50.462598 1049903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 22:28:50.462698 1049903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 22:28:50.556356 1049903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-640880 --name auto-640880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-640880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-640880 --network auto-640880 --ip 192.168.76.2 --volume auto-640880:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 22:28:50.923079 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Running}}
	I1120 22:28:50.944544 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:50.968532 1049903 cli_runner.go:164] Run: docker exec auto-640880 stat /var/lib/dpkg/alternatives/iptables
	I1120 22:28:51.033619 1049903 oci.go:144] the created container "auto-640880" has a running status.
	I1120 22:28:51.033646 1049903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa...
	I1120 22:28:51.524395 1049903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 22:28:51.573016 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:51.621977 1049903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 22:28:51.622000 1049903 kic_runner.go:114] Args: [docker exec --privileged auto-640880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 22:28:51.678357 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:28:51.699470 1049903 machine.go:94] provisionDockerMachine start ...
	I1120 22:28:51.699569 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:51.724610 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:51.724950 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:51.724965 1049903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 22:28:51.725622 1049903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 22:28:52.429236 1050333 provision.go:177] copyRemoteCerts
	I1120 22:28:52.429314 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:52.429360 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:52.447110 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:52.550833 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:28:52.599068 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:52.629491 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 22:28:52.654913 1050333 provision.go:87] duration metric: took 1.303705784s to configureAuth
	I1120 22:28:52.654948 1050333 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:52.655175 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:52.655306 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:52.677138 1050333 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:52.677562 1050333 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1120 22:28:52.677578 1050333 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:53.115956 1050333 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:53.115984 1050333 machine.go:97] duration metric: took 5.464085244s to provisionDockerMachine
	I1120 22:28:53.115995 1050333 start.go:293] postStartSetup for "no-preload-041029" (driver="docker")
	I1120 22:28:53.116006 1050333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:53.116081 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:53.116125 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.143646 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.251536 1050333 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:53.254768 1050333 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:53.254799 1050333 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:53.254811 1050333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:53.254869 1050333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:53.254957 1050333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:53.255094 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:53.263280 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:53.280711 1050333 start.go:296] duration metric: took 164.699249ms for postStartSetup
	I1120 22:28:53.280805 1050333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:53.280857 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.298110 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.395855 1050333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:53.400956 1050333 fix.go:56] duration metric: took 6.178317856s for fixHost
	I1120 22:28:53.400983 1050333 start.go:83] releasing machines lock for "no-preload-041029", held for 6.178370443s
	I1120 22:28:53.401054 1050333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-041029
	I1120 22:28:53.421021 1050333 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:53.421046 1050333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:53.421084 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.421107 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:53.443258 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.455289 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:53.651741 1050333 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:53.658089 1050333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:53.694739 1050333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:53.699554 1050333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:53.699658 1050333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:53.708759 1050333 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 22:28:53.708845 1050333 start.go:496] detecting cgroup driver to use...
	I1120 22:28:53.708907 1050333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:53.708988 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:53.724295 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:53.737350 1050333 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:53.737463 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:53.753774 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:53.767201 1050333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:53.877453 1050333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:53.991722 1050333 docker.go:234] disabling docker service ...
	I1120 22:28:53.991791 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:54.008192 1050333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:54.023009 1050333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:54.145769 1050333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:54.283262 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:54.298921 1050333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:54.313307 1050333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:54.313400 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.323056 1050333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:54.323125 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.333281 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.344299 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.353853 1050333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:54.362294 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.371935 1050333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.380727 1050333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:54.389474 1050333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:54.397191 1050333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:54.404547 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:54.513905 1050333 ssh_runner.go:195] Run: sudo systemctl restart crio
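	(The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the "pod" cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before reloading systemd and restarting CRI-O. The Go sketch below applies the same substitutions to an in-memory copy of the drop-in; the starting content is an assumed example, not the file from the test host.)

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, mirroring the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Allow unprivileged low ports, as the default_sysctls edits above do.
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	fmt.Print(conf)
}
```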
	I1120 22:28:54.708369 1050333 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:54.708481 1050333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:54.712961 1050333 start.go:564] Will wait 60s for crictl version
	I1120 22:28:54.713070 1050333 ssh_runner.go:195] Run: which crictl
	I1120 22:28:54.717130 1050333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:54.762764 1050333 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:28:54.762942 1050333 ssh_runner.go:195] Run: crio --version
	I1120 22:28:54.814802 1050333 ssh_runner.go:195] Run: crio --version
	I1120 22:28:54.850050 1050333 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:54.853052 1050333 cli_runner.go:164] Run: docker network inspect no-preload-041029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:54.875844 1050333 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:54.879963 1050333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:54.893033 1050333 kubeadm.go:884] updating cluster {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:54.893151 1050333 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:54.893196 1050333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:54.937492 1050333 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:54.937528 1050333 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:54.937549 1050333 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:54.937662 1050333 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-041029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:54.937766 1050333 ssh_runner.go:195] Run: crio config
	I1120 22:28:55.014066 1050333 cni.go:84] Creating CNI manager for ""
	I1120 22:28:55.014153 1050333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:55.014190 1050333 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:28:55.014256 1050333 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-041029 NodeName:no-preload-041029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:55.014465 1050333 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-041029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:55.014593 1050333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:55.025016 1050333 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:55.025106 1050333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:55.034051 1050333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 22:28:55.049630 1050333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:55.065414 1050333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 22:28:55.081442 1050333 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:28:55.089685 1050333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:55.100952 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:55.246906 1050333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:55.263154 1050333 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029 for IP: 192.168.85.2
	I1120 22:28:55.263178 1050333 certs.go:195] generating shared ca certs ...
	I1120 22:28:55.263196 1050333 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:55.263342 1050333 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:28:55.263404 1050333 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:28:55.263416 1050333 certs.go:257] generating profile certs ...
	I1120 22:28:55.263541 1050333 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.key
	I1120 22:28:55.263612 1050333 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key.20ef11a6
	I1120 22:28:55.263658 1050333 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key
	I1120 22:28:55.263773 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:28:55.263806 1050333 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:28:55.263820 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:28:55.263846 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:28:55.263873 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:28:55.263897 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:28:55.263943 1050333 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:55.264578 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:28:55.315139 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:28:55.384309 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:28:55.479609 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:28:55.548932 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 22:28:55.578866 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 22:28:55.603567 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:28:55.622604 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:28:55.639360 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:28:55.656783 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:28:55.674169 1050333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:28:55.694865 1050333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:28:55.709310 1050333 ssh_runner.go:195] Run: openssl version
	I1120 22:28:55.716117 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.724498 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:28:55.733344 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.737309 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.737371 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:28:55.779270 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:28:55.786548 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.793529 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:28:55.800668 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.805029 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.805101 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:28:55.847457 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:28:55.855830 1050333 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.862971 1050333 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:28:55.870398 1050333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.874783 1050333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.874891 1050333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:28:55.917249 1050333 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
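	(The block above installs each CA PEM under /usr/share/ca-certificates, asks `openssl x509 -hash -noout` for its subject-name hash, and verifies a <hash>.0 symlink under /etc/ssl/certs, e.g. b5213941.0 for minikubeCA.pem. A rough Go sketch of that hash-and-symlink step, assuming openssl is on PATH and the process can write to /etc/ssl/certs:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject-name hash (b5213941 in the log).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// The trust store expects a <hash>.0 symlink under /etc/ssl/certs.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
```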
	I1120 22:28:55.924891 1050333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:28:55.929113 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 22:28:56.014380 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 22:28:56.100019 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 22:28:56.183635 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 22:28:56.259676 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 22:28:56.392223 1050333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
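	(`openssl x509 -noout -checkend 86400` exits successfully only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now; the runs above use it to decide whether the control-plane certs need regeneration before restarting the cluster. An equivalent check written in Go, reading a PEM-encoded certificate from a path given on the command line:)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Usage: checkend <cert.pem>  (equivalent of `openssl x509 -checkend 86400`)
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if deadline.After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
```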
	I1120 22:28:56.491899 1050333 kubeadm.go:401] StartCluster: {Name:no-preload-041029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-041029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:28:56.491983 1050333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:28:56.492054 1050333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:28:56.550231 1050333 cri.go:89] found id: "e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab"
	I1120 22:28:56.550255 1050333 cri.go:89] found id: "0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0"
	I1120 22:28:56.550260 1050333 cri.go:89] found id: "f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863"
	I1120 22:28:56.550264 1050333 cri.go:89] found id: "1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6"
	I1120 22:28:56.550267 1050333 cri.go:89] found id: ""
	I1120 22:28:56.550333 1050333 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 22:28:56.585008 1050333 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T22:28:56Z" level=error msg="open /run/runc: no such file or directory"
	I1120 22:28:56.585095 1050333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:28:56.603572 1050333 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 22:28:56.603596 1050333 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 22:28:56.603651 1050333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 22:28:56.615405 1050333 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 22:28:56.615896 1050333 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-041029" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:56.616055 1050333 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-834992/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-041029" cluster setting kubeconfig missing "no-preload-041029" context setting]
	I1120 22:28:56.616448 1050333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.617968 1050333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 22:28:56.630718 1050333 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 22:28:56.630748 1050333 kubeadm.go:602] duration metric: took 27.146417ms to restartPrimaryControlPlane
	I1120 22:28:56.630757 1050333 kubeadm.go:403] duration metric: took 138.867188ms to StartCluster
	I1120 22:28:56.630774 1050333 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.630830 1050333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:28:56.631464 1050333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:28:56.631665 1050333 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:28:56.632134 1050333 config.go:182] Loaded profile config "no-preload-041029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:56.632198 1050333 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:28:56.632266 1050333 addons.go:70] Setting storage-provisioner=true in profile "no-preload-041029"
	I1120 22:28:56.632284 1050333 addons.go:239] Setting addon storage-provisioner=true in "no-preload-041029"
	W1120 22:28:56.632295 1050333 addons.go:248] addon storage-provisioner should already be in state true
	I1120 22:28:56.632319 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.632771 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.632948 1050333 addons.go:70] Setting dashboard=true in profile "no-preload-041029"
	I1120 22:28:56.632994 1050333 addons.go:239] Setting addon dashboard=true in "no-preload-041029"
	W1120 22:28:56.633021 1050333 addons.go:248] addon dashboard should already be in state true
	I1120 22:28:56.633063 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.633514 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.635922 1050333 addons.go:70] Setting default-storageclass=true in profile "no-preload-041029"
	I1120 22:28:56.636051 1050333 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-041029"
	I1120 22:28:56.636565 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.639874 1050333 out.go:179] * Verifying Kubernetes components...
	I1120 22:28:56.643083 1050333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:56.675012 1050333 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 22:28:56.680484 1050333 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 22:28:56.683297 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 22:28:56.683321 1050333 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 22:28:56.683410 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.688065 1050333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:28:56.690934 1050333 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:56.690956 1050333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:28:56.691034 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.697844 1050333 addons.go:239] Setting addon default-storageclass=true in "no-preload-041029"
	W1120 22:28:56.697873 1050333 addons.go:248] addon default-storageclass should already be in state true
	I1120 22:28:56.697899 1050333 host.go:66] Checking if "no-preload-041029" exists ...
	I1120 22:28:56.698301 1050333 cli_runner.go:164] Run: docker container inspect no-preload-041029 --format={{.State.Status}}
	I1120 22:28:56.726748 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:56.735233 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:56.750048 1050333 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:56.750069 1050333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:28:56.750135 1050333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-041029
	I1120 22:28:56.782067 1050333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/no-preload-041029/id_rsa Username:docker}
	I1120 22:28:54.875072 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-640880
	
	I1120 22:28:54.875105 1049903 ubuntu.go:182] provisioning hostname "auto-640880"
	I1120 22:28:54.875176 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:54.898043 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:54.898342 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:54.898354 1049903 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-640880 && echo "auto-640880" | sudo tee /etc/hostname
	I1120 22:28:55.080850 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-640880
	
	I1120 22:28:55.080947 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:55.104479 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:55.104782 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:55.104799 1049903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-640880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-640880/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-640880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 22:28:55.271770 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 22:28:55.271799 1049903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-834992/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-834992/.minikube}
	I1120 22:28:55.271824 1049903 ubuntu.go:190] setting up certificates
	I1120 22:28:55.271843 1049903 provision.go:84] configureAuth start
	I1120 22:28:55.271913 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:55.292669 1049903 provision.go:143] copyHostCerts
	I1120 22:28:55.292730 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem, removing ...
	I1120 22:28:55.292739 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem
	I1120 22:28:55.292839 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/ca.pem (1078 bytes)
	I1120 22:28:55.292933 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem, removing ...
	I1120 22:28:55.292938 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem
	I1120 22:28:55.292963 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/cert.pem (1123 bytes)
	I1120 22:28:55.293022 1049903 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem, removing ...
	I1120 22:28:55.293026 1049903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem
	I1120 22:28:55.293048 1049903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-834992/.minikube/key.pem (1679 bytes)
	I1120 22:28:55.293102 1049903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem org=jenkins.auto-640880 san=[127.0.0.1 192.168.76.2 auto-640880 localhost minikube]
	I1120 22:28:56.135450 1049903 provision.go:177] copyRemoteCerts
	I1120 22:28:56.135524 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 22:28:56.135584 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.155231 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:56.268691 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 22:28:56.301643 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 22:28:56.335802 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 22:28:56.367801 1049903 provision.go:87] duration metric: took 1.095943238s to configureAuth
	I1120 22:28:56.367825 1049903 ubuntu.go:206] setting minikube options for container-runtime
	I1120 22:28:56.368009 1049903 config.go:182] Loaded profile config "auto-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:28:56.368111 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.390448 1049903 main.go:143] libmachine: Using SSH client type: native
	I1120 22:28:56.390754 1049903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1120 22:28:56.390771 1049903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 22:28:56.850892 1049903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 22:28:56.850921 1049903 machine.go:97] duration metric: took 5.151431577s to provisionDockerMachine
	I1120 22:28:56.850931 1049903 client.go:176] duration metric: took 11.989606002s to LocalClient.Create
	I1120 22:28:56.850944 1049903 start.go:167] duration metric: took 11.989678167s to libmachine.API.Create "auto-640880"
	I1120 22:28:56.850951 1049903 start.go:293] postStartSetup for "auto-640880" (driver="docker")
	I1120 22:28:56.850961 1049903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 22:28:56.851048 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 22:28:56.851090 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:56.884017 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.017376 1049903 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 22:28:57.026925 1049903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 22:28:57.026956 1049903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 22:28:57.026968 1049903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/addons for local assets ...
	I1120 22:28:57.027082 1049903 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-834992/.minikube/files for local assets ...
	I1120 22:28:57.027174 1049903 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem -> 8368522.pem in /etc/ssl/certs
	I1120 22:28:57.027287 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 22:28:57.042311 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:28:57.069158 1049903 start.go:296] duration metric: took 218.191768ms for postStartSetup
	I1120 22:28:57.069546 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:57.099422 1049903 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/config.json ...
	I1120 22:28:57.099692 1049903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:28:57.099740 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.125058 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.236381 1049903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 22:28:57.243597 1049903 start.go:128] duration metric: took 12.385866674s to createHost
	I1120 22:28:57.243621 1049903 start.go:83] releasing machines lock for "auto-640880", held for 12.386006458s
	I1120 22:28:57.243692 1049903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-640880
	I1120 22:28:57.268653 1049903 ssh_runner.go:195] Run: cat /version.json
	I1120 22:28:57.268713 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.268882 1049903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 22:28:57.268951 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:28:57.301045 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.312731 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:28:57.411645 1049903 ssh_runner.go:195] Run: systemctl --version
	I1120 22:28:57.573405 1049903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 22:28:57.641652 1049903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 22:28:57.652155 1049903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 22:28:57.652240 1049903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 22:28:57.695406 1049903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 22:28:57.695432 1049903 start.go:496] detecting cgroup driver to use...
	I1120 22:28:57.695473 1049903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 22:28:57.695538 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 22:28:57.724795 1049903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 22:28:57.744122 1049903 docker.go:218] disabling cri-docker service (if available) ...
	I1120 22:28:57.744200 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 22:28:57.773197 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 22:28:57.808041 1049903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 22:28:58.043900 1049903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 22:28:58.265449 1049903 docker.go:234] disabling docker service ...
	I1120 22:28:58.265556 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 22:28:58.306569 1049903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 22:28:58.339318 1049903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 22:28:58.572526 1049903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 22:28:58.804459 1049903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 22:28:58.836917 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 22:28:58.868813 1049903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 22:28:58.868933 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.884482 1049903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 22:28:58.884600 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.897530 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.908460 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.926720 1049903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 22:28:58.937528 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.948386 1049903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.968284 1049903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 22:28:58.988499 1049903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 22:28:58.997052 1049903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 22:28:59.014305 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:28:59.243331 1049903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 22:28:59.496139 1049903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 22:28:59.496262 1049903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 22:28:59.503527 1049903 start.go:564] Will wait 60s for crictl version
	I1120 22:28:59.503648 1049903 ssh_runner.go:195] Run: which crictl
	I1120 22:28:59.511793 1049903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 22:28:59.555814 1049903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 22:28:59.555970 1049903 ssh_runner.go:195] Run: crio --version
	I1120 22:28:59.608583 1049903 ssh_runner.go:195] Run: crio --version
	I1120 22:28:59.661226 1049903 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 22:28:57.012188 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 22:28:57.012219 1050333 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 22:28:57.051569 1050333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:28:57.072533 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:28:57.099956 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 22:28:57.099986 1050333 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 22:28:57.122439 1050333 node_ready.go:35] waiting up to 6m0s for node "no-preload-041029" to be "Ready" ...
	I1120 22:28:57.167291 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 22:28:57.167311 1050333 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 22:28:57.186309 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:28:57.189761 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 22:28:57.189780 1050333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 22:28:57.203316 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 22:28:57.203337 1050333 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 22:28:57.216795 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 22:28:57.216869 1050333 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 22:28:57.335602 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 22:28:57.335624 1050333 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 22:28:57.404367 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 22:28:57.404388 1050333 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 22:28:57.463075 1050333 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:57.463096 1050333 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 22:28:57.484333 1050333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 22:28:59.664405 1049903 cli_runner.go:164] Run: docker network inspect auto-640880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 22:28:59.688694 1049903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 22:28:59.692764 1049903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:28:59.709392 1049903 kubeadm.go:884] updating cluster {Name:auto-640880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 22:28:59.709515 1049903 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 22:28:59.709574 1049903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:59.777321 1049903 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:59.777347 1049903 crio.go:433] Images already preloaded, skipping extraction
	I1120 22:28:59.777403 1049903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 22:28:59.828630 1049903 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 22:28:59.828656 1049903 cache_images.go:86] Images are preloaded, skipping loading
	I1120 22:28:59.828665 1049903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1120 22:28:59.828756 1049903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-640880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 22:28:59.828856 1049903 ssh_runner.go:195] Run: crio config
	I1120 22:28:59.937215 1049903 cni.go:84] Creating CNI manager for ""
	I1120 22:28:59.937247 1049903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:28:59.937264 1049903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 22:28:59.937289 1049903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-640880 NodeName:auto-640880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 22:28:59.937433 1049903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-640880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 22:28:59.937518 1049903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 22:28:59.946748 1049903 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 22:28:59.946845 1049903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 22:28:59.961540 1049903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1120 22:28:59.976750 1049903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 22:28:59.997587 1049903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1120 22:29:00.022343 1049903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 22:29:00.047600 1049903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 22:29:00.149327 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:29:00.419288 1049903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:29:00.447971 1049903 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880 for IP: 192.168.76.2
	I1120 22:29:00.447995 1049903 certs.go:195] generating shared ca certs ...
	I1120 22:29:00.448012 1049903 certs.go:227] acquiring lock for ca certs: {Name:mkae65486a8ee3cbe77463f7f1791e48b0f8cb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.448161 1049903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key
	I1120 22:29:00.448217 1049903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key
	I1120 22:29:00.448239 1049903 certs.go:257] generating profile certs ...
	I1120 22:29:00.448323 1049903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key
	I1120 22:29:00.448341 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt with IP's: []
	I1120 22:29:00.758657 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt ...
	I1120 22:29:00.758690 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: {Name:mk90d4fb34cbe7c69e3bbf6c05cb072350bd032a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.758878 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key ...
	I1120 22:29:00.758894 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.key: {Name:mk53abed259f75db5a291342c90e4e112df02021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.758998 1049903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48
	I1120 22:29:00.759022 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 22:29:00.859616 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 ...
	I1120 22:29:00.859646 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48: {Name:mk642baf5a111a12d0f0d63615b99c5469178f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.859817 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48 ...
	I1120 22:29:00.859832 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48: {Name:mk280b679d983240eb64192783e31425cb0b6544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:00.859983 1049903 certs.go:382] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt.2c58ae48 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt
	I1120 22:29:00.860090 1049903 certs.go:386] copying /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key.2c58ae48 -> /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key
	I1120 22:29:00.860152 1049903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key
	I1120 22:29:00.860172 1049903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt with IP's: []
	I1120 22:29:01.254424 1049903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt ...
	I1120 22:29:01.254456 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt: {Name:mk8d2462da535744bcf7c352150cedc78a8fed08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:01.254668 1049903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key ...
	I1120 22:29:01.254682 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key: {Name:mkdf81ac8fd20690459059ae6a3069d670325518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:01.254890 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem (1338 bytes)
	W1120 22:29:01.254934 1049903 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852_empty.pem, impossibly tiny 0 bytes
	I1120 22:29:01.254952 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 22:29:01.254991 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/ca.pem (1078 bytes)
	I1120 22:29:01.255018 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/cert.pem (1123 bytes)
	I1120 22:29:01.255047 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/certs/key.pem (1679 bytes)
	I1120 22:29:01.255093 1049903 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem (1708 bytes)
	I1120 22:29:01.255738 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 22:29:01.307799 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 22:29:01.355717 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 22:29:01.394344 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1120 22:29:01.428805 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1120 22:29:01.456456 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 22:29:01.489151 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 22:29:01.515089 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 22:29:01.542389 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/ssl/certs/8368522.pem --> /usr/share/ca-certificates/8368522.pem (1708 bytes)
	I1120 22:29:01.572758 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 22:29:01.600767 1049903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-834992/.minikube/certs/836852.pem --> /usr/share/ca-certificates/836852.pem (1338 bytes)
	I1120 22:29:01.636617 1049903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 22:29:01.659167 1049903 ssh_runner.go:195] Run: openssl version
	I1120 22:29:01.676600 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.689018 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8368522.pem /etc/ssl/certs/8368522.pem
	I1120 22:29:01.704056 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.708529 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 21:18 /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.708650 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8368522.pem
	I1120 22:29:01.775871 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 22:29:01.784450 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8368522.pem /etc/ssl/certs/3ec20f2e.0
	I1120 22:29:01.797962 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.810106 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 22:29:01.819242 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.824760 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.824841 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 22:29:01.878564 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 22:29:01.889545 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 22:29:01.905229 1049903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.917318 1049903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/836852.pem /etc/ssl/certs/836852.pem
	I1120 22:29:01.931804 1049903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.939507 1049903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 21:18 /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.939584 1049903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836852.pem
	I1120 22:29:01.997692 1049903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 22:29:02.007263 1049903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/836852.pem /etc/ssl/certs/51391683.0
	I1120 22:29:02.017114 1049903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 22:29:02.023773 1049903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 22:29:02.023842 1049903 kubeadm.go:401] StartCluster: {Name:auto-640880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-640880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 22:29:02.023919 1049903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 22:29:02.023989 1049903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 22:29:02.085413 1049903 cri.go:89] found id: ""
	I1120 22:29:02.085505 1049903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 22:29:02.100099 1049903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 22:29:02.113731 1049903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 22:29:02.113798 1049903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 22:29:02.126478 1049903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 22:29:02.126498 1049903 kubeadm.go:158] found existing configuration files:
	
	I1120 22:29:02.126549 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 22:29:02.134938 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 22:29:02.135092 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 22:29:02.147376 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 22:29:02.160754 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 22:29:02.160822 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 22:29:02.172465 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 22:29:02.187596 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 22:29:02.187664 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 22:29:02.212846 1049903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 22:29:02.235402 1049903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 22:29:02.235525 1049903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 22:29:02.253624 1049903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 22:29:02.364408 1049903 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 22:29:02.364467 1049903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 22:29:02.415753 1049903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 22:29:02.415840 1049903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 22:29:02.415877 1049903 kubeadm.go:319] OS: Linux
	I1120 22:29:02.415932 1049903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 22:29:02.415984 1049903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 22:29:02.416034 1049903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 22:29:02.416084 1049903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 22:29:02.416134 1049903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 22:29:02.416184 1049903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 22:29:02.416232 1049903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 22:29:02.416282 1049903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 22:29:02.416330 1049903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 22:29:02.527409 1049903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 22:29:02.527529 1049903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 22:29:02.527618 1049903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 22:29:02.551348 1049903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 22:29:02.556995 1049903 out.go:252]   - Generating certificates and keys ...
	I1120 22:29:02.557092 1049903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 22:29:02.557159 1049903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 22:29:03.091226 1049903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 22:29:03.803480 1049903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 22:29:03.978323 1050333 node_ready.go:49] node "no-preload-041029" is "Ready"
	I1120 22:29:03.978361 1050333 node_ready.go:38] duration metric: took 6.855862285s for node "no-preload-041029" to be "Ready" ...
	I1120 22:29:03.978376 1050333 api_server.go:52] waiting for apiserver process to appear ...
	I1120 22:29:03.978440 1050333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:29:07.068205 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.995586676s)
	I1120 22:29:07.068270 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.881942632s)
	I1120 22:29:07.068605 1050333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.584242906s)
	I1120 22:29:07.068860 1050333 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.090396824s)
	I1120 22:29:07.068888 1050333 api_server.go:72] duration metric: took 10.437202164s to wait for apiserver process to appear ...
	I1120 22:29:07.068895 1050333 api_server.go:88] waiting for apiserver healthz status ...
	I1120 22:29:07.068911 1050333 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 22:29:07.071877 1050333 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-041029 addons enable metrics-server
	
	I1120 22:29:07.084783 1050333 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 22:29:07.086657 1050333 api_server.go:141] control plane version: v1.34.1
	I1120 22:29:07.086689 1050333 api_server.go:131] duration metric: took 17.787297ms to wait for apiserver health ...
	I1120 22:29:07.086698 1050333 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 22:29:07.095411 1050333 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 22:29:05.143385 1049903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 22:29:05.476155 1049903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 22:29:05.600076 1049903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 22:29:05.600745 1049903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-640880 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:29:07.152078 1049903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 22:29:07.152593 1049903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-640880 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 22:29:07.483605 1049903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 22:29:07.815463 1049903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 22:29:08.274737 1049903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 22:29:08.275078 1049903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 22:29:08.483583 1049903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 22:29:08.605499 1049903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 22:29:08.774768 1049903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 22:29:09.135567 1049903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 22:29:09.281598 1049903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 22:29:09.282211 1049903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 22:29:09.290357 1049903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 22:29:09.293945 1049903 out.go:252]   - Booting up control plane ...
	I1120 22:29:09.294057 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 22:29:09.294138 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 22:29:09.294208 1049903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 22:29:09.318381 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 22:29:09.318629 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 22:29:09.324368 1049903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 22:29:09.324767 1049903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 22:29:09.324858 1049903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 22:29:09.471538 1049903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 22:29:09.471735 1049903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 22:29:07.096295 1050333 system_pods.go:59] 8 kube-system pods found
	I1120 22:29:07.096326 1050333 system_pods.go:61] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:29:07.096335 1050333 system_pods.go:61] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:29:07.096341 1050333 system_pods.go:61] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:29:07.096354 1050333 system_pods.go:61] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:29:07.096361 1050333 system_pods.go:61] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:29:07.096367 1050333 system_pods.go:61] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:29:07.096374 1050333 system_pods.go:61] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:29:07.096379 1050333 system_pods.go:61] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:29:07.096384 1050333 system_pods.go:74] duration metric: took 9.681453ms to wait for pod list to return data ...
	I1120 22:29:07.096392 1050333 default_sa.go:34] waiting for default service account to be created ...
	I1120 22:29:07.098549 1050333 addons.go:515] duration metric: took 10.466348376s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 22:29:07.100759 1050333 default_sa.go:45] found service account: "default"
	I1120 22:29:07.100783 1050333 default_sa.go:55] duration metric: took 4.384778ms for default service account to be created ...
	I1120 22:29:07.100797 1050333 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 22:29:07.105004 1050333 system_pods.go:86] 8 kube-system pods found
	I1120 22:29:07.105112 1050333 system_pods.go:89] "coredns-66bc5c9577-6dbgj" [c0fcde6b-aaaa-4f14-9417-59f3222dbed0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 22:29:07.105165 1050333 system_pods.go:89] "etcd-no-preload-041029" [06032ad4-ec63-4d95-8f91-e36730bd3606] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 22:29:07.105189 1050333 system_pods.go:89] "kindnet-2fs8p" [2d930946-643e-4c53-84fc-d1f2bc7882f3] Running
	I1120 22:29:07.105218 1050333 system_pods.go:89] "kube-apiserver-no-preload-041029" [0c693809-7a46-42f0-bda5-f6e99aac0f2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 22:29:07.105260 1050333 system_pods.go:89] "kube-controller-manager-no-preload-041029" [fe5d47f3-e8c5-4cb7-b5db-16562eb7e6e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 22:29:07.105294 1050333 system_pods.go:89] "kube-proxy-n78zb" [f3bbf58f-77ab-4e32-b0df-64ae33d7674d] Running
	I1120 22:29:07.105340 1050333 system_pods.go:89] "kube-scheduler-no-preload-041029" [d7ad8229-d07b-4b00-bcdd-1222e31497f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 22:29:07.105364 1050333 system_pods.go:89] "storage-provisioner" [48ce6d51-0b32-4396-9e66-ce78a12fe4da] Running
	I1120 22:29:07.105392 1050333 system_pods.go:126] duration metric: took 4.587965ms to wait for k8s-apps to be running ...
	I1120 22:29:07.105436 1050333 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 22:29:07.105556 1050333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:29:07.133360 1050333 system_svc.go:56] duration metric: took 27.91368ms WaitForService to wait for kubelet
	I1120 22:29:07.133473 1050333 kubeadm.go:587] duration metric: took 10.501779872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 22:29:07.133512 1050333 node_conditions.go:102] verifying NodePressure condition ...
	I1120 22:29:07.139028 1050333 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 22:29:07.139134 1050333 node_conditions.go:123] node cpu capacity is 2
	I1120 22:29:07.139164 1050333 node_conditions.go:105] duration metric: took 5.609032ms to run NodePressure ...
	I1120 22:29:07.139210 1050333 start.go:242] waiting for startup goroutines ...
	I1120 22:29:07.139237 1050333 start.go:247] waiting for cluster config update ...
	I1120 22:29:07.139287 1050333 start.go:256] writing updated cluster config ...
	I1120 22:29:07.139773 1050333 ssh_runner.go:195] Run: rm -f paused
	I1120 22:29:07.149742 1050333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:29:07.155456 1050333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 22:29:09.185073 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:11.662917 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:10.470460 1049903 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00166109s
	I1120 22:29:10.470579 1049903 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 22:29:10.470667 1049903 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 22:29:10.470763 1049903 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 22:29:10.470847 1049903 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1120 22:29:13.667517 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:16.164074 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:15.828858 1049903 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.358595078s
	I1120 22:29:19.040497 1049903 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.570637319s
	I1120 22:29:20.972014 1049903 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.501962676s
	I1120 22:29:20.993298 1049903 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 22:29:21.018729 1049903 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 22:29:21.040956 1049903 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 22:29:21.041450 1049903 kubeadm.go:319] [mark-control-plane] Marking the node auto-640880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 22:29:21.060481 1049903 kubeadm.go:319] [bootstrap-token] Using token: rwnehs.sqap5qw5j7cco1yz
	W1120 22:29:18.661798 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:20.662689 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:21.063593 1049903 out.go:252]   - Configuring RBAC rules ...
	I1120 22:29:21.063715 1049903 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 22:29:21.069350 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 22:29:21.085918 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 22:29:21.093124 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 22:29:21.098456 1049903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 22:29:21.103552 1049903 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 22:29:21.382530 1049903 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 22:29:21.853611 1049903 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 22:29:22.401100 1049903 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 22:29:22.403510 1049903 kubeadm.go:319] 
	I1120 22:29:22.403599 1049903 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 22:29:22.403607 1049903 kubeadm.go:319] 
	I1120 22:29:22.403685 1049903 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 22:29:22.403691 1049903 kubeadm.go:319] 
	I1120 22:29:22.403716 1049903 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 22:29:22.406693 1049903 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 22:29:22.406755 1049903 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 22:29:22.406761 1049903 kubeadm.go:319] 
	I1120 22:29:22.406815 1049903 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 22:29:22.406819 1049903 kubeadm.go:319] 
	I1120 22:29:22.406867 1049903 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 22:29:22.406872 1049903 kubeadm.go:319] 
	I1120 22:29:22.406923 1049903 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 22:29:22.407015 1049903 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 22:29:22.407091 1049903 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 22:29:22.407096 1049903 kubeadm.go:319] 
	I1120 22:29:22.407461 1049903 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 22:29:22.407548 1049903 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 22:29:22.407554 1049903 kubeadm.go:319] 
	I1120 22:29:22.407889 1049903 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rwnehs.sqap5qw5j7cco1yz \
	I1120 22:29:22.407999 1049903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 \
	I1120 22:29:22.408242 1049903 kubeadm.go:319] 	--control-plane 
	I1120 22:29:22.408264 1049903 kubeadm.go:319] 
	I1120 22:29:22.408567 1049903 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 22:29:22.408578 1049903 kubeadm.go:319] 
	I1120 22:29:22.408903 1049903 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rwnehs.sqap5qw5j7cco1yz \
	I1120 22:29:22.409196 1049903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:02163999c49d3a9d636e89a7ecab487af228723c1a8e7a89bb8c14b8cccaeb24 
	I1120 22:29:22.432719 1049903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 22:29:22.432956 1049903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 22:29:22.433066 1049903 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 22:29:22.433081 1049903 cni.go:84] Creating CNI manager for ""
	I1120 22:29:22.433088 1049903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 22:29:22.437136 1049903 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 22:29:22.440483 1049903 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 22:29:22.462313 1049903 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 22:29:22.462336 1049903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 22:29:22.531123 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 22:29:23.761761 1049903 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.230554549s)
	I1120 22:29:23.761799 1049903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 22:29:23.761908 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:23.762001 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-640880 minikube.k8s.io/updated_at=2025_11_20T22_29_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=auto-640880 minikube.k8s.io/primary=true
	I1120 22:29:24.050201 1049903 ops.go:34] apiserver oom_adj: -16
	I1120 22:29:24.050301 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:24.550804 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 22:29:22.667175 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:25.161524 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:25.050454 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:25.550448 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:26.050919 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:26.550922 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.051017 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.551270 1049903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 22:29:27.665671 1049903 kubeadm.go:1114] duration metric: took 3.903805301s to wait for elevateKubeSystemPrivileges
	I1120 22:29:27.665698 1049903 kubeadm.go:403] duration metric: took 25.641861801s to StartCluster
	I1120 22:29:27.665715 1049903 settings.go:142] acquiring lock: {Name:mk4198de6ca26291dfb55b0c7ca994d12ee6408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:27.665785 1049903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:29:27.666768 1049903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-834992/kubeconfig: {Name:mk5cc2e8ca448154a81a947ec09c396f055d9772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 22:29:27.666992 1049903 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 22:29:27.667131 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 22:29:27.667350 1049903 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 22:29:27.667417 1049903 config.go:182] Loaded profile config "auto-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:29:27.667435 1049903 addons.go:70] Setting storage-provisioner=true in profile "auto-640880"
	I1120 22:29:27.667459 1049903 addons.go:70] Setting default-storageclass=true in profile "auto-640880"
	I1120 22:29:27.667463 1049903 addons.go:239] Setting addon storage-provisioner=true in "auto-640880"
	I1120 22:29:27.667470 1049903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-640880"
	I1120 22:29:27.667489 1049903 host.go:66] Checking if "auto-640880" exists ...
	I1120 22:29:27.667774 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.667998 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.671487 1049903 out.go:179] * Verifying Kubernetes components...
	I1120 22:29:27.674628 1049903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 22:29:27.726045 1049903 addons.go:239] Setting addon default-storageclass=true in "auto-640880"
	I1120 22:29:27.726084 1049903 host.go:66] Checking if "auto-640880" exists ...
	I1120 22:29:27.726493 1049903 cli_runner.go:164] Run: docker container inspect auto-640880 --format={{.State.Status}}
	I1120 22:29:27.748403 1049903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 22:29:27.751588 1049903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:29:27.751614 1049903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 22:29:27.751682 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:29:27.761149 1049903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 22:29:27.761173 1049903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 22:29:27.761235 1049903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-640880
	I1120 22:29:27.787655 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:29:27.803285 1049903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/auto-640880/id_rsa Username:docker}
	I1120 22:29:28.173260 1049903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 22:29:28.173408 1049903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 22:29:28.245635 1049903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 22:29:28.260663 1049903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 22:29:28.795398 1049903 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 22:29:28.796667 1049903 node_ready.go:35] waiting up to 15m0s for node "auto-640880" to be "Ready" ...
	I1120 22:29:29.123228 1049903 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 22:29:29.126018 1049903 addons.go:515] duration metric: took 1.458658976s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 22:29:29.301033 1049903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-640880" context rescaled to 1 replicas
	W1120 22:29:27.162277 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:29.661151 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:31.661406 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:30.799811 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:33.299944 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:34.161702 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:36.660881 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:35.300521 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:37.799781 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:38.661289 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:41.160991 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:39.800161 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:42.300257 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:43.161366 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:45.163365 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	W1120 22:29:47.661660 1050333 pod_ready.go:104] pod "coredns-66bc5c9577-6dbgj" is not "Ready", error: <nil>
	I1120 22:29:48.161778 1050333 pod_ready.go:94] pod "coredns-66bc5c9577-6dbgj" is "Ready"
	I1120 22:29:48.161812 1050333 pod_ready.go:86] duration metric: took 41.006271316s for pod "coredns-66bc5c9577-6dbgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.164719 1050333 pod_ready.go:83] waiting for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.169474 1050333 pod_ready.go:94] pod "etcd-no-preload-041029" is "Ready"
	I1120 22:29:48.169551 1050333 pod_ready.go:86] duration metric: took 4.79957ms for pod "etcd-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.171885 1050333 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.176511 1050333 pod_ready.go:94] pod "kube-apiserver-no-preload-041029" is "Ready"
	I1120 22:29:48.176538 1050333 pod_ready.go:86] duration metric: took 4.623896ms for pod "kube-apiserver-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.179295 1050333 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.359242 1050333 pod_ready.go:94] pod "kube-controller-manager-no-preload-041029" is "Ready"
	I1120 22:29:48.359271 1050333 pod_ready.go:86] duration metric: took 179.940486ms for pod "kube-controller-manager-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.559334 1050333 pod_ready.go:83] waiting for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:48.960142 1050333 pod_ready.go:94] pod "kube-proxy-n78zb" is "Ready"
	I1120 22:29:48.960173 1050333 pod_ready.go:86] duration metric: took 400.801924ms for pod "kube-proxy-n78zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.159486 1050333 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.559637 1050333 pod_ready.go:94] pod "kube-scheduler-no-preload-041029" is "Ready"
	I1120 22:29:49.559665 1050333 pod_ready.go:86] duration metric: took 400.150953ms for pod "kube-scheduler-no-preload-041029" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 22:29:49.559678 1050333 pod_ready.go:40] duration metric: took 42.409820283s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 22:29:49.635049 1050333 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 22:29:49.638160 1050333 out.go:179] * Done! kubectl is now configured to use "no-preload-041029" cluster and "default" namespace by default
	W1120 22:29:44.800627 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:47.299327 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:49.300195 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:51.300393 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:53.799360 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:56.307594 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:29:58.800406 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:30:00.809509 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	W1120 22:30:03.301327 1049903 node_ready.go:57] node "auto-640880" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 20 22:29:32 no-preload-041029 crio[658]: time="2025-11-20T22:29:32.820415365Z" level=info msg="Removed container 55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz/dashboard-metrics-scraper" id=81c21f73-1fc3-4b1e-871f-05d8fc66b187 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 22:29:36 no-preload-041029 conmon[1156]: conmon a6f77ff04e1d67a44bd5 <ninfo>: container 1178 exited with status 1
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.811908099Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fe8307dd-4426-4f73-aef1-b7b8af17ea4b name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.813233261Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7072cd69-1f1f-44bd-8cd6-a8077dcbb993 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.814502897Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7e17d36f-0137-4878-8ae4-8241aed161cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.814724513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821433555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821759934Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b6533367f3be66a5aa81529eeebca6b162f66898fda1b5a9ec741152a5602d22/merged/etc/passwd: no such file or directory"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.821865905Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b6533367f3be66a5aa81529eeebca6b162f66898fda1b5a9ec741152a5602d22/merged/etc/group: no such file or directory"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.822287358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.844151122Z" level=info msg="Created container 41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf: kube-system/storage-provisioner/storage-provisioner" id=7e17d36f-0137-4878-8ae4-8241aed161cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.845219985Z" level=info msg="Starting container: 41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf" id=dfeb82c2-c69a-44eb-a1af-15492b54d217 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 22:29:36 no-preload-041029 crio[658]: time="2025-11-20T22:29:36.846930736Z" level=info msg="Started container" PID=1654 containerID=41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf description=kube-system/storage-provisioner/storage-provisioner id=dfeb82c2-c69a-44eb-a1af-15492b54d217 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3750cec50ddd4ceca860591336bab161957c7d5145281763c08ed2394540bf71
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.378629641Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384125696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384162997Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.384185495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387700921Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387740019Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.387764553Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390866221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390901906Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.390927063Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.394116716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 22:29:46 no-preload-041029 crio[658]: time="2025-11-20T22:29:46.394154903Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	41ba82d6da898       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           30 seconds ago       Running             storage-provisioner         2                   3750cec50ddd4       storage-provisioner                          kube-system
	203bde87ce2b0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   56a820dde62d1       dashboard-metrics-scraper-6ffb444bf9-gtbnz   kubernetes-dashboard
	d7207e0f6514d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   8c3009aa039e8       kubernetes-dashboard-855c9754f9-5fl85        kubernetes-dashboard
	440ee2ef9222e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   6c4a89b0ad3bd       busybox                                      default
	47eef4f0b9636       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   e384c68fadaf6       coredns-66bc5c9577-6dbgj                     kube-system
	a6f77ff04e1d6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   3750cec50ddd4       storage-provisioner                          kube-system
	e3ff002bcd2e2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   47c7f265d737f       kube-proxy-n78zb                             kube-system
	da42598cf8490       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   85963bf79f54d       kindnet-2fs8p                                kube-system
	e42bdea342f42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b5451396c4751       etcd-no-preload-041029                       kube-system
	0962480e895b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f40364b846a24       kube-controller-manager-no-preload-041029    kube-system
	f023b4b884cd5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5eb8d2a64ea46       kube-scheduler-no-preload-041029             kube-system
	1ed9b7cf8d081       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4afb9f6835c85       kube-apiserver-no-preload-041029             kube-system
	
	
	==> coredns [47eef4f0b9636eb9f49ce7cfceedd7b832747ca4656d77970e8755154fc7ac35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44164 - 12134 "HINFO IN 6698151684193989111.614327327706340683. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.045562792s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-041029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-041029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-041029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T22_27_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 22:27:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-041029
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 22:29:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 22:29:35 +0000   Thu, 20 Nov 2025 22:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-041029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8a9cfc0-4549-4e9b-8f8a-328559b1944e
	  Boot ID:                    bb387883-2f05-498f-a5ab-f8e487e138de
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-6dbgj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 etcd-no-preload-041029                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-2fs8p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-no-preload-041029              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-no-preload-041029     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-n78zb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-no-preload-041029              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gtbnz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5fl85         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s (x8 over 2m22s)  kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m6s                   node-controller  Node no-preload-041029 event: Registered Node no-preload-041029 in Controller
	  Normal   NodeReady                109s                   kubelet          Node no-preload-041029 status is now: NodeReady
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node no-preload-041029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node no-preload-041029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node no-preload-041029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node no-preload-041029 event: Registered Node no-preload-041029 in Controller
	
	
	==> dmesg <==
	[ +43.992377] overlayfs: idmapped layers are currently not supported
	[Nov20 22:07] overlayfs: idmapped layers are currently not supported
	[ +38.869641] overlayfs: idmapped layers are currently not supported
	[Nov20 22:08] overlayfs: idmapped layers are currently not supported
	[Nov20 22:10] overlayfs: idmapped layers are currently not supported
	[Nov20 22:11] overlayfs: idmapped layers are currently not supported
	[Nov20 22:13] overlayfs: idmapped layers are currently not supported
	[Nov20 22:14] overlayfs: idmapped layers are currently not supported
	[Nov20 22:15] overlayfs: idmapped layers are currently not supported
	[Nov20 22:17] overlayfs: idmapped layers are currently not supported
	[Nov20 22:19] overlayfs: idmapped layers are currently not supported
	[Nov20 22:20] overlayfs: idmapped layers are currently not supported
	[ +19.123936] overlayfs: idmapped layers are currently not supported
	[Nov20 22:21] overlayfs: idmapped layers are currently not supported
	[ +38.615546] overlayfs: idmapped layers are currently not supported
	[Nov20 22:22] overlayfs: idmapped layers are currently not supported
	[Nov20 22:24] overlayfs: idmapped layers are currently not supported
	[ +35.164985] overlayfs: idmapped layers are currently not supported
	[Nov20 22:25] overlayfs: idmapped layers are currently not supported
	[Nov20 22:26] overlayfs: idmapped layers are currently not supported
	[Nov20 22:27] overlayfs: idmapped layers are currently not supported
	[ +44.355242] overlayfs: idmapped layers are currently not supported
	[Nov20 22:28] overlayfs: idmapped layers are currently not supported
	[ +28.528461] overlayfs: idmapped layers are currently not supported
	[Nov20 22:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e42bdea342f42392b071351be610744a76403aa1460991517dc30c6622b12fab] <==
	{"level":"warn","ts":"2025-11-20T22:29:00.859791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:00.893724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:00.954952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.009750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.062763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.095896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.136000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.200887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.264788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.297889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.339423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.395375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.450140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.514908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.556704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.560591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.588060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.627967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.703081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.720593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.785594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.858875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:01.936213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:02.059485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T22:29:02.229292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:30:07 up  5:12,  0 user,  load average: 4.12, 4.14, 3.16
	Linux no-preload-041029 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da42598cf8490287fd97dafd07a73f5eaa9f8fa0e2bcbe2f23c4598aaec33417] <==
	I1120 22:29:06.161457       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 22:29:06.203404       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 22:29:06.203554       1 main.go:148] setting mtu 1500 for CNI 
	I1120 22:29:06.203566       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 22:29:06.203581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T22:29:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 22:29:06.404021       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 22:29:06.404053       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 22:29:06.404061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 22:29:06.404160       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 22:29:36.405858       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 22:29:36.405862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 22:29:36.406006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 22:29:36.406056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1120 22:29:37.604203       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 22:29:37.604311       1 metrics.go:72] Registering metrics
	I1120 22:29:37.605183       1 controller.go:711] "Syncing nftables rules"
	I1120 22:29:46.378324       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:29:46.378363       1 main.go:301] handling current node
	I1120 22:29:56.379782       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:29:56.379816       1 main.go:301] handling current node
	I1120 22:30:06.382609       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 22:30:06.383063       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ed9b7cf8d08106500bd207cf6aeb94655fa86b8f7e5a5e12ea8481115f296b6] <==
	I1120 22:29:04.426577       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 22:29:04.426825       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 22:29:04.426894       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 22:29:04.426925       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 22:29:04.467242       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 22:29:04.467278       1 policy_source.go:240] refreshing policies
	I1120 22:29:04.468114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 22:29:04.468153       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 22:29:04.468176       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 22:29:04.468183       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 22:29:04.469702       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 22:29:04.513841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 22:29:04.537296       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 22:29:04.637911       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 22:29:05.544120       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 22:29:05.775914       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 22:29:06.219934       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 22:29:06.393176       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 22:29:06.561216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 22:29:06.908974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.191.26"}
	I1120 22:29:06.933457       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.217.170"}
	W1120 22:29:06.956434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 22:29:06.957855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 22:29:06.964221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 22:29:09.319537       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0962480e895b00f5e5f7566371faa096c72149db953c264531067463575412d0] <==
	I1120 22:29:09.225103       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 22:29:09.235419       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 22:29:09.244724       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 22:29:09.246005       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 22:29:09.249217       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 22:29:09.249929       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 22:29:09.250425       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-041029"
	I1120 22:29:09.250501       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 22:29:09.249234       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 22:29:09.249645       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 22:29:09.249667       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 22:29:09.255061       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 22:29:09.256094       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 22:29:09.265452       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:29:09.268967       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 22:29:09.280715       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 22:29:09.284117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 22:29:09.293141       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 22:29:09.293244       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 22:29:09.293295       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 22:29:09.293410       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 22:29:09.294785       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 22:29:09.295171       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 22:29:09.295248       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 22:29:09.295290       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [e3ff002bcd2e24647b6415e521297e2309e2f39cdf9a3f07226779379f304671] <==
	I1120 22:29:06.621001       1 server_linux.go:53] "Using iptables proxy"
	I1120 22:29:06.950857       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 22:29:07.076910       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 22:29:07.077013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 22:29:07.077125       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 22:29:07.192198       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 22:29:07.192359       1 server_linux.go:132] "Using iptables Proxier"
	I1120 22:29:07.200516       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 22:29:07.200960       1 server.go:527] "Version info" version="v1.34.1"
	I1120 22:29:07.201163       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:29:07.202379       1 config.go:200] "Starting service config controller"
	I1120 22:29:07.202433       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 22:29:07.202478       1 config.go:106] "Starting endpoint slice config controller"
	I1120 22:29:07.202505       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 22:29:07.202543       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 22:29:07.202570       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 22:29:07.203347       1 config.go:309] "Starting node config controller"
	I1120 22:29:07.206294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 22:29:07.206353       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 22:29:07.302913       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 22:29:07.303067       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 22:29:07.303096       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f023b4b884cd598958f1afa19540045fe5a0c2be9cb914f11b375b8788914863] <==
	I1120 22:29:00.948578       1 serving.go:386] Generated self-signed cert in-memory
	I1120 22:29:05.022617       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 22:29:05.022651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 22:29:05.053296       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 22:29:05.053371       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 22:29:05.053391       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 22:29:05.053416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 22:29:05.085760       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.085787       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.085827       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:29:05.085833       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 22:29:05.157848       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 22:29:05.186125       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 22:29:05.186856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: I1120 22:29:10.001055     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xvl\" (UniqueName: \"kubernetes.io/projected/df232e57-08f8-4065-abe1-33961949ca0f-kube-api-access-22xvl\") pod \"kubernetes-dashboard-855c9754f9-5fl85\" (UID: \"df232e57-08f8-4065-abe1-33961949ca0f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fl85"
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: I1120 22:29:10.001118     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/13af84c9-f7c8-43fb-bff4-db99817b7d82-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gtbnz\" (UID: \"13af84c9-f7c8-43fb-bff4-db99817b7d82\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz"
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: W1120 22:29:10.232321     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de WatchSource:0}: Error finding container 56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de: Status 404 returned error can't find the container with id 56a820dde62d119d62c0790b01bfca5207eef554578957461c1fcf02235b04de
	Nov 20 22:29:10 no-preload-041029 kubelet[779]: W1120 22:29:10.263822     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8049b6a31f79328ff7701d6aca4e65dd83d639b75ef35e7f6de560af38e0ad71/crio-8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf WatchSource:0}: Error finding container 8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf: Status 404 returned error can't find the container with id 8c3009aa039e8fb745234940232e11ebd04578fb98a7020c9b8da858884cfbaf
	Nov 20 22:29:17 no-preload-041029 kubelet[779]: I1120 22:29:17.738481     779 scope.go:117] "RemoveContainer" containerID="3d94c28c91f1e3c18d9b0fed99b46e64f7c5c7ceb52b979c2d69f870a4afadab"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: I1120 22:29:18.744777     779 scope.go:117] "RemoveContainer" containerID="3d94c28c91f1e3c18d9b0fed99b46e64f7c5c7ceb52b979c2d69f870a4afadab"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: I1120 22:29:18.745047     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:18 no-preload-041029 kubelet[779]: E1120 22:29:18.745187     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:19 no-preload-041029 kubelet[779]: I1120 22:29:19.752830     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:19 no-preload-041029 kubelet[779]: E1120 22:29:19.752976     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:20 no-preload-041029 kubelet[779]: I1120 22:29:20.754648     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:20 no-preload-041029 kubelet[779]: E1120 22:29:20.754813     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.555722     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.799578     779 scope.go:117] "RemoveContainer" containerID="55706790f2768535ff77f89660096d424a2e07db5e7f834c761c753de8f36c6f"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.799927     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: E1120 22:29:32.800077     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:32 no-preload-041029 kubelet[779]: I1120 22:29:32.822835     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fl85" podStartSLOduration=9.914021582 podStartE2EDuration="23.822805172s" podCreationTimestamp="2025-11-20 22:29:09 +0000 UTC" firstStartedPulling="2025-11-20 22:29:10.267542696 +0000 UTC m=+15.003058814" lastFinishedPulling="2025-11-20 22:29:24.176326278 +0000 UTC m=+28.911842404" observedRunningTime="2025-11-20 22:29:24.781714535 +0000 UTC m=+29.517230653" watchObservedRunningTime="2025-11-20 22:29:32.822805172 +0000 UTC m=+37.558321289"
	Nov 20 22:29:36 no-preload-041029 kubelet[779]: I1120 22:29:36.811278     779 scope.go:117] "RemoveContainer" containerID="a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090"
	Nov 20 22:29:40 no-preload-041029 kubelet[779]: I1120 22:29:40.191365     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:40 no-preload-041029 kubelet[779]: E1120 22:29:40.191601     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:29:52 no-preload-041029 kubelet[779]: I1120 22:29:52.555706     779 scope.go:117] "RemoveContainer" containerID="203bde87ce2b03a82b4c50019e0edb462ab301d6858878f3f25a66a9194a2b97"
	Nov 20 22:29:52 no-preload-041029 kubelet[779]: E1120 22:29:52.555898     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gtbnz_kubernetes-dashboard(13af84c9-f7c8-43fb-bff4-db99817b7d82)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gtbnz" podUID="13af84c9-f7c8-43fb-bff4-db99817b7d82"
	Nov 20 22:30:02 no-preload-041029 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 22:30:02 no-preload-041029 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 22:30:02 no-preload-041029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d7207e0f6514d7dd0cc35630dc0c8be98fda4a396f91d7842768b91e9cf4adf1] <==
	2025/11/20 22:29:24 Starting overwatch
	2025/11/20 22:29:24 Using namespace: kubernetes-dashboard
	2025/11/20 22:29:24 Using in-cluster config to connect to apiserver
	2025/11/20 22:29:24 Using secret token for csrf signing
	2025/11/20 22:29:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 22:29:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 22:29:24 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 22:29:24 Generating JWE encryption key
	2025/11/20 22:29:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 22:29:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 22:29:24 Initializing JWE encryption key from synchronized object
	2025/11/20 22:29:24 Creating in-cluster Sidecar client
	2025/11/20 22:29:24 Serving insecurely on HTTP port: 9090
	2025/11/20 22:29:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 22:29:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41ba82d6da898187aa191047bdafd7455c14554b508e92e24f58961c59481ccf] <==
	W1120 22:29:36.875887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:40.330770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:44.590918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:48.189426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:51.244041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.266715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.271641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:29:54.272038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 22:29:54.272278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8!
	I1120 22:29:54.273320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"415b729b-7223-449b-a0a8-421bccd3a052", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8 became leader
	W1120 22:29:54.279208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:54.284310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 22:29:54.372940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-041029_699ac762-ec9e-4c21-8edb-2e4b2d8bdce8!
	W1120 22:29:56.287829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:56.292469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:58.296364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:29:58.303602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:00.309443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:00.327133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:02.330044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:02.334766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:04.345429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:04.355306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:06.361941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 22:30:06.374481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a6f77ff04e1d67a44bd587841792b8215abd9c076d0500109bc25fc0c3307090] <==
	I1120 22:29:06.605041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 22:29:36.607246       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041029 -n no-preload-041029
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-041029 -n no-preload-041029: exit status 2 (391.400301ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-041029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.79s)
E1120 22:35:49.783919  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:54.491183  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (256/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 37.74
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 39.31
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 167.42
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 37.93
50 TestCertExpiration 251.17
52 TestForceSystemdFlag 42.52
53 TestForceSystemdEnv 43.64
58 TestErrorSpam/setup 34.57
59 TestErrorSpam/start 0.72
60 TestErrorSpam/status 1.39
61 TestErrorSpam/pause 6.39
62 TestErrorSpam/unpause 5.73
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.47
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.75
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.71
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 37.06
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.51
86 TestFunctional/serial/LogsFileCmd 1.53
87 TestFunctional/serial/InvalidService 4.15
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 9.69
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.07
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 24.63
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.8
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.21
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 2.21
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.82
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
121 TestFunctional/parallel/ImageCommands/Setup 0.69
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.51
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.41
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
152 TestFunctional/parallel/MountCmd/any-port 7.74
153 TestFunctional/parallel/MountCmd/specific-port 2.1
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 212.59
163 TestMultiControlPlane/serial/DeployApp 8.36
164 TestMultiControlPlane/serial/PingHostFromPods 1.61
165 TestMultiControlPlane/serial/AddWorkerNode 60.96
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.26
169 TestMultiControlPlane/serial/StopSecondaryNode 12.79
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 33.19
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.61
176 TestMultiControlPlane/serial/StopCluster 36.36
179 TestMultiControlPlane/serial/AddSecondaryNode 80.2
185 TestJSONOutput/start/Command 82.26
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.9
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 74.57
211 TestKicCustomNetwork/use_default_bridge_network 31.91
212 TestKicExistingNetwork 32.26
213 TestKicCustomSubnet 37.15
214 TestKicStaticIP 39.08
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 81.33
219 TestMountStart/serial/StartWithMountFirst 9.42
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.37
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.74
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.36
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 142.61
231 TestMultiNode/serial/DeployApp2Nodes 6.82
232 TestMultiNode/serial/PingHostFrom2Pods 0.94
233 TestMultiNode/serial/AddNode 57.96
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.46
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.16
239 TestMultiNode/serial/RestartKeepsNodes 73.44
240 TestMultiNode/serial/DeleteNode 5.7
241 TestMultiNode/serial/StopMultiNode 24.01
242 TestMultiNode/serial/RestartMultiNode 48.3
243 TestMultiNode/serial/ValidateNameConflict 35.34
248 TestPreload 158.82
250 TestScheduledStopUnix 111.56
253 TestInsufficientStorage 13.33
254 TestRunningBinaryUpgrade 62.95
256 TestKubernetesUpgrade 371.41
257 TestMissingContainerUpgrade 119.17
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 41.78
261 TestNoKubernetes/serial/StartWithStopK8s 8.48
262 TestNoKubernetes/serial/Start 11.35
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
265 TestNoKubernetes/serial/ProfileList 1.31
266 TestNoKubernetes/serial/Stop 1.38
267 TestNoKubernetes/serial/StartNoArgs 7.92
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 7.98
270 TestStoppedBinaryUpgrade/Upgrade 57.87
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
280 TestPause/serial/Start 81.78
281 TestPause/serial/SecondStartNoReconfiguration 28.79
290 TestNetworkPlugins/group/false 3.75
295 TestStartStop/group/old-k8s-version/serial/FirstStart 63.51
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.41
298 TestStartStop/group/old-k8s-version/serial/Stop 12
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 47.42
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.69
308 TestStartStop/group/embed-certs/serial/FirstStart 86.78
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.93
314 TestStartStop/group/embed-certs/serial/DeployApp 8.49
316 TestStartStop/group/embed-certs/serial/Stop 12.83
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/embed-certs/serial/SecondStart 50.56
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
324 TestStartStop/group/no-preload/serial/FirstStart 76.91
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.42
330 TestStartStop/group/newest-cni/serial/FirstStart 41.47
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.4
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
335 TestStartStop/group/newest-cni/serial/SecondStart 15.96
336 TestStartStop/group/no-preload/serial/DeployApp 8.33
338 TestStartStop/group/no-preload/serial/Stop 12.7
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
343 TestNetworkPlugins/group/auto/Start 88.21
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
345 TestStartStop/group/no-preload/serial/SecondStart 63.28
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
350 TestNetworkPlugins/group/kindnet/Start 83.07
351 TestNetworkPlugins/group/auto/KubeletFlags 0.38
352 TestNetworkPlugins/group/auto/NetCatPod 11.38
353 TestNetworkPlugins/group/auto/DNS 0.17
354 TestNetworkPlugins/group/auto/Localhost 0.18
355 TestNetworkPlugins/group/auto/HairPin 0.17
356 TestNetworkPlugins/group/calico/Start 77.43
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
359 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.15
362 TestNetworkPlugins/group/kindnet/HairPin 0.13
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.39
365 TestNetworkPlugins/group/calico/NetCatPod 11.37
366 TestNetworkPlugins/group/custom-flannel/Start 70.27
367 TestNetworkPlugins/group/calico/DNS 0.33
368 TestNetworkPlugins/group/calico/Localhost 0.26
369 TestNetworkPlugins/group/calico/HairPin 0.16
370 TestNetworkPlugins/group/enable-default-cni/Start 73.03
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.36
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
376 TestNetworkPlugins/group/flannel/Start 64.66
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.51
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
382 TestNetworkPlugins/group/bridge/Start 77.49
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
385 TestNetworkPlugins/group/flannel/NetCatPod 12.41
386 TestNetworkPlugins/group/flannel/DNS 0.17
387 TestNetworkPlugins/group/flannel/Localhost 0.13
388 TestNetworkPlugins/group/flannel/HairPin 0.14
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 10.26
391 TestNetworkPlugins/group/bridge/DNS 0.15
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (37.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-775498 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-775498 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.735033215s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (37.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1120 21:10:08.432504  836852 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1120 21:10:08.432587  836852 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-775498
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-775498: exit status 85 (81.410093ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-775498 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-775498 │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:09:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:09:30.750657  836857 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:09:30.750841  836857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:09:30.750877  836857 out.go:374] Setting ErrFile to fd 2...
	I1120 21:09:30.750898  836857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:09:30.751195  836857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	W1120 21:09:30.751368  836857 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21923-834992/.minikube/config/config.json: open /home/jenkins/minikube-integration/21923-834992/.minikube/config/config.json: no such file or directory
	I1120 21:09:30.751822  836857 out.go:368] Setting JSON to true
	I1120 21:09:30.752695  836857 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13896,"bootTime":1763659075,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:09:30.752791  836857 start.go:143] virtualization:  
	I1120 21:09:30.756815  836857 out.go:99] [download-only-775498] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1120 21:09:30.757015  836857 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball: no such file or directory
	I1120 21:09:30.757156  836857 notify.go:221] Checking for updates...
	I1120 21:09:30.760543  836857 out.go:171] MINIKUBE_LOCATION=21923
	I1120 21:09:30.763653  836857 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:09:30.766551  836857 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:09:30.769459  836857 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:09:30.772526  836857 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1120 21:09:30.778243  836857 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 21:09:30.778527  836857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:09:30.799174  836857 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:09:30.799303  836857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:09:30.864717  836857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-20 21:09:30.855599146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:09:30.864834  836857 docker.go:319] overlay module found
	I1120 21:09:30.867928  836857 out.go:99] Using the docker driver based on user configuration
	I1120 21:09:30.867971  836857 start.go:309] selected driver: docker
	I1120 21:09:30.867979  836857 start.go:930] validating driver "docker" against <nil>
	I1120 21:09:30.868091  836857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:09:30.926599  836857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-20 21:09:30.917026087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:09:30.926764  836857 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:09:30.927075  836857 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1120 21:09:30.927230  836857 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 21:09:30.930342  836857 out.go:171] Using Docker driver with root privileges
	I1120 21:09:30.933470  836857 cni.go:84] Creating CNI manager for ""
	I1120 21:09:30.933542  836857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:09:30.933556  836857 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:09:30.933652  836857 start.go:353] cluster config:
	{Name:download-only-775498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-775498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:09:30.936645  836857 out.go:99] Starting "download-only-775498" primary control-plane node in "download-only-775498" cluster
	I1120 21:09:30.936669  836857 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:09:30.939577  836857 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:09:30.939648  836857 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 21:09:30.939721  836857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:09:30.961350  836857 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 21:09:30.962239  836857 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 21:09:30.962367  836857 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 21:09:31.001732  836857 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1120 21:09:31.001763  836857 cache.go:65] Caching tarball of preloaded images
	I1120 21:09:31.002592  836857 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 21:09:31.006363  836857 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1120 21:09:31.006403  836857 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1120 21:09:31.117070  836857 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1120 21:09:31.117213  836857 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1120 21:09:35.843353  836857 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	
	
	* The control-plane node download-only-775498 host does not exist
	  To start a cluster, run: "minikube start -p download-only-775498"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
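As a sanity check on the preload download shown in the log above, the cached tarball can be compared against the MD5 value the GCS API returned (a minimal sketch; the cache path and checksum are copied from the preload.go lines above and will differ on another host):

    md5sum /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
    # expected digest, as reported by preload.go:295 above: e092595ade89dbfc477bd4cd6b9c633b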

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-775498
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (39.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-395142 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-395142 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.310160214s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (39.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1120 21:10:48.179938  836852 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1120 21:10:48.179973  836852 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-395142
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-395142: exit status 85 (85.980619ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-775498 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-775498 │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p download-only-775498                                                                                                                                                   │ download-only-775498 │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -o=json --download-only -p download-only-395142 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-395142 │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:10:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:10:08.915170  837062 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:08.915288  837062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:08.915301  837062 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:08.915306  837062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:08.915560  837062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:10:08.915975  837062 out.go:368] Setting JSON to true
	I1120 21:10:08.916801  837062 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13934,"bootTime":1763659075,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:10:08.916894  837062 start.go:143] virtualization:  
	I1120 21:10:08.920218  837062 out.go:99] [download-only-395142] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:10:08.920540  837062 notify.go:221] Checking for updates...
	I1120 21:10:08.924126  837062 out.go:171] MINIKUBE_LOCATION=21923
	I1120 21:10:08.927153  837062 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:10:08.930042  837062 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:10:08.933038  837062 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:10:08.935990  837062 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1120 21:10:08.941797  837062 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 21:10:08.942111  837062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:10:08.974550  837062 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:10:08.974675  837062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:09.040640  837062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:09.030949122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:09.040769  837062 docker.go:319] overlay module found
	I1120 21:10:09.044103  837062 out.go:99] Using the docker driver based on user configuration
	I1120 21:10:09.044146  837062 start.go:309] selected driver: docker
	I1120 21:10:09.044154  837062 start.go:930] validating driver "docker" against <nil>
	I1120 21:10:09.044259  837062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:09.100057  837062 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-20 21:10:09.090317043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:09.100234  837062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:10:09.100521  837062 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1120 21:10:09.100707  837062 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 21:10:09.103866  837062 out.go:171] Using Docker driver with root privileges
	I1120 21:10:09.106669  837062 cni.go:84] Creating CNI manager for ""
	I1120 21:10:09.106739  837062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:10:09.106753  837062 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:10:09.106833  837062 start.go:353] cluster config:
	{Name:download-only-395142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-395142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:09.110026  837062 out.go:99] Starting "download-only-395142" primary control-plane node in "download-only-395142" cluster
	I1120 21:10:09.110060  837062 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:10:09.113003  837062 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:10:09.113100  837062 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:10:09.113152  837062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:10:09.129770  837062 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 21:10:09.129894  837062 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 21:10:09.129919  837062 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 21:10:09.129928  837062 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 21:10:09.129937  837062 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 21:10:09.190476  837062 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1120 21:10:09.190516  837062 cache.go:65] Caching tarball of preloaded images
	I1120 21:10:09.191265  837062 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:10:09.194289  837062 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1120 21:10:09.194323  837062 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1120 21:10:09.291134  837062 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1120 21:10:09.291184  837062 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21923-834992/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-395142 host does not exist
	  To start a cluster, run: "minikube start -p download-only-395142"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-395142
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1120 21:10:49.326925  836852 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-490692 --alsologtostderr --binary-mirror http://127.0.0.1:37155 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-490692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-490692
--- PASS: TestBinaryMirror (0.60s)
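The binary-mirror run above resolves kubectl through dl.k8s.io together with its published .sha256 sidecar (see the binary.go:80 line). The same verification can be repeated by hand; a sketch, with the URLs taken from that log line:

    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check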

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-828342
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-828342: exit status 85 (78.288825ms)

                                                
                                                
-- stdout --
	* Profile "addons-828342" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-828342"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-828342
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-828342: exit status 85 (73.375123ms)

                                                
                                                
-- stdout --
	* Profile "addons-828342" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-828342"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (167.42s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-828342 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-828342 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m47.423128973s)
--- PASS: TestAddons/Setup (167.42s)
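The addons baked into the start invocation above can also be inspected or toggled after the cluster is up. A sketch against the addons-828342 profile created by this test (addon names shown are examples from the start flags above):

    out/minikube-linux-arm64 addons list -p addons-828342
    out/minikube-linux-arm64 addons enable metrics-server -p addons-828342
    out/minikube-linux-arm64 addons disable registry -p addons-828342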

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-828342 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-828342 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-828342 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-828342 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [67741c67-fb46-416f-b67a-6aa82f235803] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [67741c67-fb46-416f-b67a-6aa82f235803] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003328895s
addons_test.go:694: (dbg) Run:  kubectl --context addons-828342 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-828342 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-828342 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-828342 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.36s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-828342
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-828342: (12.096056378s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-828342
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-828342
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-828342
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

                                                
                                    
x
+
TestCertOptions (37.93s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-961311 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.040605277s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-961311 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-961311 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-961311 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-961311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-961311
E1120 22:21:15.819690  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-961311: (2.107440266s)
--- PASS: TestCertOptions (37.93s)
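The openssl call in the test above dumps the full API server certificate. To pull out just the names and IPs injected by --apiserver-ips / --apiserver-names, a filtered variant of the same command can be used (a sketch; profile name and cert path are taken from the test above, and the profile only exists until the cleanup step deletes it):

    out/minikube-linux-arm64 -p cert-options-961311 ssh "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'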

                                                
                                    
x
+
TestCertExpiration (251.17s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-420078 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.990545142s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-420078 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.044722428s)
helpers_test.go:175: Cleaning up "cert-expiration-420078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-420078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-420078: (3.134341998s)
--- PASS: TestCertExpiration (251.17s)
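To see the effect of --cert-expiration directly, the certificate's validity window can be read off the node while the profile still exists (a sketch; cert path assumed to match the one used in TestCertOptions above):

    out/minikube-linux-arm64 -p cert-expiration-420078 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"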

                                                
                                    
x
+
TestForceSystemdFlag (42.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-775688 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-775688 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.541153019s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-775688 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-775688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-775688
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-775688: (2.587732552s)
--- PASS: TestForceSystemdFlag (42.52s)
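The file read at the end of the test above is minikube's CRI-O drop-in config; with --force-systemd the expectation is that it selects the systemd cgroup manager. A quick manual check while the profile is still up (a sketch; cgroup_manager is the standard CRI-O option name, and the expected value is an assumption, not output captured in this run):

    out/minikube-linux-arm64 -p force-systemd-flag-775688 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # assumed result with --force-systemd: cgroup_manager = "systemd"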

                                                
                                    
x
+
TestForceSystemdEnv (43.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-833370 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.924241761s)
helpers_test.go:175: Cleaning up "force-systemd-env-833370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-833370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-833370: (2.715637719s)
--- PASS: TestForceSystemdEnv (43.64s)

                                                
                                    
x
+
TestErrorSpam/setup (34.57s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-285614 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-285614 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-285614 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-285614 --driver=docker  --container-runtime=crio: (34.574764774s)
--- PASS: TestErrorSpam/setup (34.57s)

                                                
                                    
x
+
TestErrorSpam/start (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

                                                
                                    
x
+
TestErrorSpam/status (1.39s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 status
--- PASS: TestErrorSpam/status (1.39s)

                                                
                                    
x
+
TestErrorSpam/pause (6.39s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause: exit status 80 (1.942555118s)

                                                
                                                
-- stdout --
	* Pausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause: exit status 80 (2.070050829s)

                                                
                                                
-- stdout --
	* Pausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause: exit status 80 (2.376155398s)

                                                
                                                
-- stdout --
	* Pausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.39s)
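Each pause attempt above fails at the same step: minikube shells into the node and runs "sudo runc list -f json", which errors with "open /run/runc: no such file or directory". That probe can be repeated by hand while the profile is still up (a sketch, assuming the docker driver used in this run):

    out/minikube-linux-arm64 ssh -p nospam-285614 -- "sudo runc list -f json"
    out/minikube-linux-arm64 ssh -p nospam-285614 -- "ls -ld /run/runc"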

                                                
                                    
x
+
TestErrorSpam/unpause (5.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause: exit status 80 (2.183174612s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause: exit status 80 (1.954625571s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause: exit status 80 (1.590630265s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-285614 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.73s)

                                                
                                    
x
+
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 stop: (1.306376503s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-285614 --log_dir /tmp/nospam-285614 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21923-834992/.minikube/files/etc/test/nested/copy/836852/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (79.47s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1120 21:18:38.582581  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.588981  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.600326  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.621725  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.663138  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.744551  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:38.906038  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:39.227761  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:39.869827  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:41.151219  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:43.713551  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:48.835548  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:59.077232  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:19.559096  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-038709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.466734487s)
--- PASS: TestFunctional/serial/StartWithProxy (79.47s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.75s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1120 21:19:38.461932  836852 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --alsologtostderr -v=8
E1120 21:20:00.521394  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-038709 --alsologtostderr -v=8: (39.75011697s)
functional_test.go:678: soft start took 39.75061447s for "functional-038709" cluster.
I1120 21:20:18.212330  836852 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (39.75s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-038709 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:3.1: (1.252497857s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:3.3: (1.302124496s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 cache add registry.k8s.io/pause:latest: (1.154855111s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-038709 /tmp/TestFunctionalserialCacheCmdcacheadd_local365074849/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache add minikube-local-cache-test:functional-038709
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache delete minikube-local-cache-test:functional-038709
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-038709
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.165216ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
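
Condensed, the cache-reload flow exercised above is (a sketch; the binary path and profile name are specific to this run, and crictl runs inside the node over ssh):
	out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: the image was just removed from the node
	out/minikube-linux-arm64 -p functional-038709 cache reload                                            # pushes the host-side cached images back into the node
	out/minikube-linux-arm64 -p functional-038709 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds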

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 kubectl -- --context functional-038709 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-038709 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-038709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.063482715s)
functional_test.go:776: restart took 37.063584608s for "functional-038709" cluster.
I1120 21:21:02.912743  836852 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.06s)
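
The restart above passes a component flag through --extra-config, which takes component.key=value pairs; in outline (same flag and profile as logged, --wait=all blocks until the verified components report ready):
	out/minikube-linux-arm64 start -p functional-038709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all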

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-038709 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 logs: (1.509541832s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 logs --file /tmp/TestFunctionalserialLogsFileCmd1404626704/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 logs --file /tmp/TestFunctionalserialLogsFileCmd1404626704/001/logs.txt: (1.529871912s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.15s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-038709 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-038709
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-038709: exit status 115 (386.449265ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30110 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-038709 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)
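
Boiled down, the negative case exercised above is (commands as logged; the manifest lives in the test's testdata directory):
	kubectl --context functional-038709 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-038709    # exits 115 with SVC_UNREACHABLE: no running pod backs the service
	kubectl --context functional-038709 delete -f testdata/invalidsvc.yaml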

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 config get cpus: exit status 14 (71.980049ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 config get cpus: exit status 14 (71.801315ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
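
Summarized, the config round-trip above is (a sketch; `config get` on an unset key is expected to exit 14):
	out/minikube-linux-arm64 -p functional-038709 config unset cpus
	out/minikube-linux-arm64 -p functional-038709 config get cpus    # exits 14: key not found in config
	out/minikube-linux-arm64 -p functional-038709 config set cpus 2
	out/minikube-linux-arm64 -p functional-038709 config get cpus    # succeeds
	out/minikube-linux-arm64 -p functional-038709 config unset cpus
	out/minikube-linux-arm64 -p functional-038709 config get cpus    # exits 14 again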

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-038709 --alsologtostderr -v=1]
2025/11/20 21:31:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-038709 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 864648: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.81671ms)

                                                
                                                
-- stdout --
	* [functional-038709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:31:34.957777  864354 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:31:34.957986  864354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:34.958016  864354 out.go:374] Setting ErrFile to fd 2...
	I1120 21:31:34.958038  864354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:34.958452  864354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:31:34.958931  864354 out.go:368] Setting JSON to false
	I1120 21:31:34.959988  864354 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15220,"bootTime":1763659075,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:31:34.960127  864354 start.go:143] virtualization:  
	I1120 21:31:34.963806  864354 out.go:179] * [functional-038709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:31:34.967720  864354 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:31:34.967814  864354 notify.go:221] Checking for updates...
	I1120 21:31:34.973747  864354 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:31:34.977052  864354 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:31:34.980262  864354 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:31:34.983320  864354 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:31:34.986069  864354 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:31:34.989461  864354 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:31:34.990014  864354 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:31:35.023331  864354 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:31:35.023479  864354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:31:35.092399  864354 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:31:35.080950825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:31:35.092512  864354 docker.go:319] overlay module found
	I1120 21:31:35.095645  864354 out.go:179] * Using the docker driver based on existing profile
	I1120 21:31:35.098526  864354 start.go:309] selected driver: docker
	I1120 21:31:35.098545  864354 start.go:930] validating driver "docker" against &{Name:functional-038709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-038709 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:31:35.098642  864354 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:31:35.102288  864354 out.go:203] 
	W1120 21:31:35.105150  864354 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 21:31:35.108043  864354 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
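
In outline, the two dry-run invocations above (same flags as logged): the first fails validation because 250MB is below minikube's 1800MB usable minimum, the second omits --memory and validates the existing profile cleanly:
	out/minikube-linux-arm64 start -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio   # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-linux-arm64 start -p functional-038709 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio             # passes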

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-038709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.198717ms)

                                                
                                                
-- stdout --
	* [functional-038709] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:31:35.460214  864472 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:31:35.460394  864472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:35.460405  864472 out.go:374] Setting ErrFile to fd 2...
	I1120 21:31:35.460411  864472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:31:35.460775  864472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:31:35.461158  864472 out.go:368] Setting JSON to false
	I1120 21:31:35.462037  864472 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15221,"bootTime":1763659075,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 21:31:35.462107  864472 start.go:143] virtualization:  
	I1120 21:31:35.465133  864472 out.go:179] * [functional-038709] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1120 21:31:35.468077  864472 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:31:35.468208  864472 notify.go:221] Checking for updates...
	I1120 21:31:35.474289  864472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:31:35.477098  864472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 21:31:35.479894  864472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 21:31:35.482635  864472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:31:35.485479  864472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:31:35.488820  864472 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:31:35.489421  864472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:31:35.519075  864472 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:31:35.519184  864472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:31:35.585489  864472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:31:35.575503821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:31:35.585621  864472 docker.go:319] overlay module found
	I1120 21:31:35.590691  864472 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1120 21:31:35.593495  864472 start.go:309] selected driver: docker
	I1120 21:31:35.593516  864472 start.go:930] validating driver "docker" against &{Name:functional-038709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-038709 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:31:35.593673  864472 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:31:35.597001  864472 out.go:203] 
	W1120 21:31:35.599794  864472 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1120 21:31:35.602553  864472 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
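
The -f flag above takes a Go template over the status fields; in outline (format string copied as logged, including its "kublet" label, which is just output text from the test):
	out/minikube-linux-arm64 -p functional-038709 status
	out/minikube-linux-arm64 -p functional-038709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-arm64 -p functional-038709 status -o json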

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [108fa340-31de-4833-ab01-d9f4c7f1ca44] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004128869s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-038709 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-038709 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-038709 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-038709 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8787b62a-089d-4cba-bb53-cca82f69b182] Pending
helpers_test.go:352: "sp-pod" [8787b62a-089d-4cba-bb53-cca82f69b182] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8787b62a-089d-4cba-bb53-cca82f69b182] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.006667988s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-038709 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-038709 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-038709 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a55b3600-8de9-4f40-8f1a-a8b975832a28] Pending
helpers_test.go:352: "sp-pod" [a55b3600-8de9-4f40-8f1a-a8b975832a28] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a55b3600-8de9-4f40-8f1a-a8b975832a28] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004080441s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-038709 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.63s)
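
Condensed, the persistence check above: a file written by the first pod is still visible after the pod is deleted and recreated against the same claim (manifests live in the test's testdata directory):
	kubectl --context functional-038709 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-038709 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-038709 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-038709 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-038709 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-038709 exec sp-pod -- ls /tmp/mount    # the file written by the first pod survives the recreation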

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh -n functional-038709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cp functional-038709:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2497971392/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh -n functional-038709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh -n functional-038709 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)
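
Put briefly, the copy round-trip above (the /tmp destination is this run's per-test temp directory; the last pair copies into a path that does not yet exist on the node):
	out/minikube-linux-arm64 -p functional-038709 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-038709 ssh -n functional-038709 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p functional-038709 cp functional-038709:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2497971392/001/cp-test.txt
	out/minikube-linux-arm64 -p functional-038709 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	out/minikube-linux-arm64 -p functional-038709 ssh -n functional-038709 "sudo cat /tmp/does/not/exist/cp-test.txt"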

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/836852/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /etc/test/nested/copy/836852/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/836852.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /etc/ssl/certs/836852.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/836852.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /usr/share/ca-certificates/836852.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8368522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /etc/ssl/certs/8368522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8368522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /usr/share/ca-certificates/8368522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-038709 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active docker": exit status 1 (276.849745ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active containerd": exit status 1 (280.475465ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
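
In short, the check above: with crio as the active runtime, the other container runtimes should be inactive, and systemctl's non-zero status for an inactive unit surfaces through ssh:
	out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active docker"       # prints "inactive", ssh exits 3
	out/minikube-linux-arm64 -p functional-038709 ssh "sudo systemctl is-active containerd"   # prints "inactive", ssh exits 3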

                                                
                                    
x
+
TestFunctional/parallel/License (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-arm64 license: (2.213786364s)
--- PASS: TestFunctional/parallel/License (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-038709 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-038709 image ls --format short --alsologtostderr:
I1120 21:31:46.672818  865012 out.go:360] Setting OutFile to fd 1 ...
I1120 21:31:46.673005  865012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:46.673035  865012 out.go:374] Setting ErrFile to fd 2...
I1120 21:31:46.673058  865012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:46.673333  865012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
I1120 21:31:46.673986  865012 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:46.674154  865012 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:46.674705  865012 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
I1120 21:31:46.692732  865012 ssh_runner.go:195] Run: systemctl --version
I1120 21:31:46.692787  865012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
I1120 21:31:46.711153  865012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
I1120 21:31:46.817712  865012 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
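
The same listing is exercised in three output modes here and in the two sections that follow (commands as logged):
	out/minikube-linux-arm64 -p functional-038709 image ls --format short --alsologtostderr
	out/minikube-linux-arm64 -p functional-038709 image ls --format table --alsologtostderr
	out/minikube-linux-arm64 -p functional-038709 image ls --format json --alsologtostderr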

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-038709 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-038709  │ c89507e599efb │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-038709 image ls --format table --alsologtostderr:
I1120 21:31:51.273660  865486 out.go:360] Setting OutFile to fd 1 ...
I1120 21:31:51.273791  865486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:51.273802  865486 out.go:374] Setting ErrFile to fd 2...
I1120 21:31:51.273807  865486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:51.274074  865486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
I1120 21:31:51.274794  865486 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:51.274948  865486 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:51.275607  865486 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
I1120 21:31:51.294916  865486 ssh_runner.go:195] Run: systemctl --version
I1120 21:31:51.295012  865486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
I1120 21:31:51.314158  865486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
I1120 21:31:51.414512  865486 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
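For reference, the listing above is easy to reproduce by hand against the same profile, and the stderr shows what backs it: image ls shells into the node and reads "sudo crictl images --output json". A minimal sketch (the profile name functional-038709 is the one from this run; any profile works the same way):

	# table view of the images known to the CRI-O runtime in the node
	out/minikube-linux-arm64 -p functional-038709 image ls --format table
	# roughly the same data, straight from crictl inside the node
	out/minikube-linux-arm64 -p functional-038709 ssh -- sudo crictl images --output json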

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-038709 image ls --format json --alsologtostderr:
[{"id":"fdb429158c76bc023d9719a996b58a1cb72771ce4824d5f60d83bc83493254fc","repoDigests":["docker.io/library/2a9b63a6039dc4b9d1444fa2f3818e71b105d859116be06e511a384a1ad18f21-tmp@sha256:a83f8cc2857e5aca77d1e5ee0300c548d2f5ba42816dff4a1422c46763a1b18c"],"repoTags":[],"size":"1638177"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca
9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/
kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c89507e599efb0b08c21452db47a99369cdb3cb205ccab0e8941a4fb74930bc8","repoDigests":["localhost/my-image@sha256:65ee2231966ad77fb1a1bebba7b1e4d6b29f0b9fd6f8d9e13af7020c5b4f7f76"],"repoTags":["localhost/my-image:functional-038709"],"size":"1640790"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v
1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7
a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef
4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf
5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-038709 image ls --format json --alsologtostderr:
I1120 21:31:51.037044  865449 out.go:360] Setting OutFile to fd 1 ...
I1120 21:31:51.037161  865449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:51.037171  865449 out.go:374] Setting ErrFile to fd 2...
I1120 21:31:51.037176  865449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:51.037548  865449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
I1120 21:31:51.038481  865449 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:51.038631  865449 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:51.039319  865449 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
I1120 21:31:51.058163  865449 ssh_runner.go:195] Run: systemctl --version
I1120 21:31:51.058234  865449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
I1120 21:31:51.077871  865449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
I1120 21:31:51.182554  865449 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-038709 image ls --format yaml --alsologtostderr:
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-038709 image ls --format yaml --alsologtostderr:
I1120 21:31:46.902013  865048 out.go:360] Setting OutFile to fd 1 ...
I1120 21:31:46.902127  865048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:46.902139  865048 out.go:374] Setting ErrFile to fd 2...
I1120 21:31:46.902144  865048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:46.902521  865048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
I1120 21:31:46.903836  865048 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:46.904070  865048 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:46.904586  865048 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
I1120 21:31:46.923421  865048 ssh_runner.go:195] Run: systemctl --version
I1120 21:31:46.923483  865048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
I1120 21:31:46.941110  865048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
I1120 21:31:47.041772  865048 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh pgrep buildkitd: exit status 1 (276.163681ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr: (3.395546501s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fdb429158c7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-038709
--> c89507e599e
Successfully tagged localhost/my-image:functional-038709
c89507e599efb0b08c21452db47a99369cdb3cb205ccab0e8941a4fb74930bc8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr:
I1120 21:31:47.401112  865143 out.go:360] Setting OutFile to fd 1 ...
I1120 21:31:47.401907  865143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:47.401921  865143 out.go:374] Setting ErrFile to fd 2...
I1120 21:31:47.401926  865143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 21:31:47.402239  865143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
I1120 21:31:47.402930  865143 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:47.403652  865143 config.go:182] Loaded profile config "functional-038709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 21:31:47.404225  865143 cli_runner.go:164] Run: docker container inspect functional-038709 --format={{.State.Status}}
I1120 21:31:47.423228  865143 ssh_runner.go:195] Run: systemctl --version
I1120 21:31:47.423288  865143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-038709
I1120 21:31:47.440740  865143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/functional-038709/id_rsa Username:docker}
I1120 21:31:47.541440  865143 build_images.go:162] Building image from path: /tmp/build.324083442.tar
I1120 21:31:47.541534  865143 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1120 21:31:47.549297  865143 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.324083442.tar
I1120 21:31:47.552966  865143 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.324083442.tar: stat -c "%s %y" /var/lib/minikube/build/build.324083442.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.324083442.tar': No such file or directory
I1120 21:31:47.552996  865143 ssh_runner.go:362] scp /tmp/build.324083442.tar --> /var/lib/minikube/build/build.324083442.tar (3072 bytes)
I1120 21:31:47.571112  865143 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.324083442
I1120 21:31:47.579458  865143 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.324083442 -xf /var/lib/minikube/build/build.324083442.tar
I1120 21:31:47.589267  865143 crio.go:315] Building image: /var/lib/minikube/build/build.324083442
I1120 21:31:47.589393  865143 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-038709 /var/lib/minikube/build/build.324083442 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1120 21:31:50.721487  865143 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-038709 /var/lib/minikube/build/build.324083442 --cgroup-manager=cgroupfs: (3.1320632s)
I1120 21:31:50.721559  865143 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.324083442
I1120 21:31:50.730628  865143 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.324083442.tar
I1120 21:31:50.739890  865143 build_images.go:218] Built localhost/my-image:functional-038709 from /tmp/build.324083442.tar
I1120 21:31:50.739921  865143 build_images.go:134] succeeded building to: functional-038709
I1120 21:31:50.739927  865143 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
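The three STEP lines above imply that testdata/build contains roughly this Dockerfile (reconstructed from the build log, not copied from the repository) plus a content.txt file:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

With a directory of that shape the same flow can be repeated by hand; as the stderr shows, the build itself runs inside the node via podman:

	out/minikube-linux-arm64 -p functional-038709 image build -t localhost/my-image:functional-038709 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-038709 image ls   # localhost/my-image:functional-038709 should now be listed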

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-038709
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image rm kicbase/echo-server:functional-038709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 860767: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-038709 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [377025ab-ee8a-4e2b-9551-7a68c4b9a242] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1120 21:21:22.443380  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [377025ab-ee8a-4e2b-9551-7a68c4b9a242] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004940462s
I1120 21:21:28.574525  836852 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-038709 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.56.153 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
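The direct-access check amounts to hitting the LoadBalancer IP that minikube tunnel exposes on the host. A rough manual equivalent (the address 10.106.56.153 is specific to this run, and curl here is illustrative; the test uses its own HTTP client):

	out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr &
	kubectl --context functional-038709 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	curl -sI http://10.106.56.153/   # should answer while the tunnel is running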

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-038709 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 service list -o json
functional_test.go:1504: Took "510.676056ms" to run "out/minikube-linux-arm64 -p functional-038709 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "350.824366ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.778195ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "358.011631ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.683372ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
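Both timings come from the same command; --light skips validating each cluster's status, which is why it returns in well under 100ms. If jq is available (an assumption; the test parses the JSON in Go, and the field names below are assumed from minikube's profile list output rather than taken from this log), the profile names can be pulled out directly:

	out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'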

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdany-port3403205366/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763674283729423975" to /tmp/TestFunctionalparallelMountCmdany-port3403205366/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763674283729423975" to /tmp/TestFunctionalparallelMountCmdany-port3403205366/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763674283729423975" to /tmp/TestFunctionalparallelMountCmdany-port3403205366/001/test-1763674283729423975
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.129138ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 21:31:24.083819  836852 retry.go:31] will retry after 322.21645ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 20 21:31 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 20 21:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 20 21:31 test-1763674283729423975
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh cat /mount-9p/test-1763674283729423975
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-038709 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [14422099-3edc-4a34-81b4-799a4c5ce2c4] Pending
helpers_test.go:352: "busybox-mount" [14422099-3edc-4a34-81b4-799a4c5ce2c4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [14422099-3edc-4a34-81b4-799a4c5ce2c4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [14422099-3edc-4a34-81b4-799a4c5ce2c4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003394659s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-038709 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdany-port3403205366/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)
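The sequence above boils down to: start a 9p mount from the host, confirm it from inside the node, exercise it from a pod, then unmount. Stripped to the shell commands involved (the /tmp path in the log is this run's temp directory; any host directory can stand in for /some/host/dir):

	out/minikube-linux-arm64 mount -p functional-038709 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-038709 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-038709 ssh "sudo umount -f /mount-9p"   # cleanup, as the test does at the end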

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdspecific-port1271893012/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.034882ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 21:31:31.835993  836852 retry.go:31] will retry after 658.78713ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdspecific-port1271893012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-038709 ssh "sudo umount -f /mount-9p": exit status 1 (293.688791ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-038709 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdspecific-port1271893012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-038709 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-038709 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-038709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1578888124/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-038709
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-038709
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-038709
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (212.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1120 21:33:38.577749  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:35:01.647348  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m31.751649472s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (212.59s)
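The cluster built here is what the rest of the group runs against. Minus the test harness, the invocation is just the start and status calls below; --ha is what makes this a multi-control-plane profile (ha-409851 plus the -m02 and -m03 nodes seen in later steps), with a worker attached afterwards by the AddWorkerNode step:

	out/minikube-linux-arm64 -p ha-409851 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5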

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 kubectl -- rollout status deployment/busybox: (5.553824s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-mgvhj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-wfkjx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-mgvhj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-wfkjx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-mgvhj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-wfkjx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.36s)
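The DNS checks run the same three lookups in every busybox replica, which verifies that CoreDNS is reachable from pods spread across the nodes. Against a single pod (name taken from this run) the check reduces to:

	out/minikube-linux-arm64 -p ha-409851 kubectl -- rollout status deployment/busybox
	out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- nslookup kubernetes.default.svc.cluster.local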

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-mgvhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-mgvhj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-wfkjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-wfkjx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
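The host probe first resolves host.minikube.internal inside the pod, then pings the host-side address; the awk/cut pipeline only extracts the resolved IP from busybox nslookup output (its fifth line). The two commands, as run above against one of the replicas:

	out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 -p ha-409851 kubectl -- exec busybox-7b57f96db7-hqh2f -- sh -c "ping -c 1 192.168.49.1"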

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node add --alsologtostderr -v 5
E1120 21:36:15.820269  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:15.826627  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:15.838027  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:15.859486  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:15.900886  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:15.982435  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:16.144014  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:16.465842  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:17.107913  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:18.389346  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:20.952050  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:26.074065  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:36:36.316260  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 node add --alsologtostderr -v 5: (59.84738219s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5: (1.11634818s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.96s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-409851 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076248186s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.26s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 status --output json --alsologtostderr -v 5: (1.160561325s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp testdata/cp-test.txt ha-409851:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851_ha-409851-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test_ha-409851_ha-409851-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851_ha-409851-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test_ha-409851_ha-409851-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851_ha-409851-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test_ha-409851_ha-409851-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp testdata/cp-test.txt ha-409851-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m02:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m02_ha-409851.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test_ha-409851-m02_ha-409851.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m02:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m02_ha-409851-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test_ha-409851-m02_ha-409851-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m02:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m02_ha-409851-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test_ha-409851-m02_ha-409851-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp testdata/cp-test.txt ha-409851-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m03_ha-409851.txt
E1120 21:36:56.798009  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m03_ha-409851-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m03:/home/docker/cp-test.txt ha-409851-m04:/home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test_ha-409851-m03_ha-409851-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp testdata/cp-test.txt ha-409851-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668750254/001/cp-test_ha-409851-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851:/home/docker/cp-test_ha-409851-m04_ha-409851.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851 "sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m02:/home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m02 "sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 cp ha-409851-m04:/home/docker/cp-test.txt ha-409851-m03:/home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 ssh -n ha-409851-m03 "sudo cat /home/docker/cp-test_ha-409851-m04_ha-409851-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)
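Note: the copy-and-verify pattern exercised above (minikube cp into each node, then minikube ssh -n <node> "sudo cat ..." to read the file back) can be reproduced outside the test harness. A minimal Go sketch, assuming a minikube binary on PATH and reusing the profile and node names from this run; it is an illustration, not part of the suite.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run shells out to the minikube binary (assumed to be on PATH) and fails fast.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-409851" // profile and node names taken from the run above
	nodes := []string{"ha-409851", "ha-409851-m02", "ha-409851-m03", "ha-409851-m04"}
	for _, n := range nodes {
		// Copy the local test file onto the node, then read it back over SSH.
		run("-p", profile, "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		got := run("-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: read back %d bytes, non-empty=%v\n", n, len(got), strings.TrimSpace(got) != "")
	}
}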

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 node stop m02 --alsologtostderr -v 5: (12.034423419s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5: exit status 7 (758.63592ms)

                                                
                                                
-- stdout --
	ha-409851
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409851-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409851-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409851-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:37:17.007394  880634 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:37:17.007573  880634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:37:17.007585  880634 out.go:374] Setting ErrFile to fd 2...
	I1120 21:37:17.007591  880634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:37:17.007979  880634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:37:17.008197  880634 out.go:368] Setting JSON to false
	I1120 21:37:17.008233  880634 mustload.go:66] Loading cluster: ha-409851
	I1120 21:37:17.008328  880634 notify.go:221] Checking for updates...
	I1120 21:37:17.008659  880634 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:37:17.008678  880634 status.go:174] checking status of ha-409851 ...
	I1120 21:37:17.009533  880634 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:37:17.028425  880634 status.go:371] ha-409851 host status = "Running" (err=<nil>)
	I1120 21:37:17.028448  880634 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:37:17.028752  880634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851
	I1120 21:37:17.048976  880634 host.go:66] Checking if "ha-409851" exists ...
	I1120 21:37:17.049292  880634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:37:17.049338  880634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851
	I1120 21:37:17.069977  880634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33892 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851/id_rsa Username:docker}
	I1120 21:37:17.185126  880634 ssh_runner.go:195] Run: systemctl --version
	I1120 21:37:17.191955  880634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:37:17.204826  880634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:37:17.268993  880634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-20 21:37:17.258348702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:37:17.269594  880634 kubeconfig.go:125] found "ha-409851" server: "https://192.168.49.254:8443"
	I1120 21:37:17.269633  880634 api_server.go:166] Checking apiserver status ...
	I1120 21:37:17.269684  880634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:37:17.281862  880634 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1242/cgroup
	I1120 21:37:17.290595  880634 api_server.go:182] apiserver freezer: "7:freezer:/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio/crio-fbad7ffe88cec3df85b4ef65dd2cbd7ee3f9cef3eae091aa81ad3c445cfec443"
	I1120 21:37:17.290672  880634 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d20916d298c99b043596ab6bb765520bf7a9119134d9373bbc61ff2ec5ffd853/crio/crio-fbad7ffe88cec3df85b4ef65dd2cbd7ee3f9cef3eae091aa81ad3c445cfec443/freezer.state
	I1120 21:37:17.298844  880634 api_server.go:204] freezer state: "THAWED"
	I1120 21:37:17.298888  880634 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 21:37:17.307415  880634 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 21:37:17.307442  880634 status.go:463] ha-409851 apiserver status = Running (err=<nil>)
	I1120 21:37:17.307466  880634 status.go:176] ha-409851 status: &{Name:ha-409851 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:37:17.307483  880634 status.go:174] checking status of ha-409851-m02 ...
	I1120 21:37:17.307794  880634 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:37:17.325594  880634 status.go:371] ha-409851-m02 host status = "Stopped" (err=<nil>)
	I1120 21:37:17.325620  880634 status.go:384] host is not running, skipping remaining checks
	I1120 21:37:17.325627  880634 status.go:176] ha-409851-m02 status: &{Name:ha-409851-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:37:17.325649  880634 status.go:174] checking status of ha-409851-m03 ...
	I1120 21:37:17.325973  880634 cli_runner.go:164] Run: docker container inspect ha-409851-m03 --format={{.State.Status}}
	I1120 21:37:17.345033  880634 status.go:371] ha-409851-m03 host status = "Running" (err=<nil>)
	I1120 21:37:17.345060  880634 host.go:66] Checking if "ha-409851-m03" exists ...
	I1120 21:37:17.345373  880634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m03
	I1120 21:37:17.363197  880634 host.go:66] Checking if "ha-409851-m03" exists ...
	I1120 21:37:17.363515  880634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:37:17.363710  880634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m03
	I1120 21:37:17.380982  880634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m03/id_rsa Username:docker}
	I1120 21:37:17.480662  880634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:37:17.494262  880634 kubeconfig.go:125] found "ha-409851" server: "https://192.168.49.254:8443"
	I1120 21:37:17.494292  880634 api_server.go:166] Checking apiserver status ...
	I1120 21:37:17.494343  880634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:37:17.506256  880634 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1195/cgroup
	I1120 21:37:17.515892  880634 api_server.go:182] apiserver freezer: "7:freezer:/docker/58308eaea781bea871a5e23f5a856165d8e798310a7495efe422468c5df5af1f/crio/crio-285c7efcbffd99cbbbe5ce0c980c6f8b8ffa6dd459bed6b779edfeee0b0a3a5c"
	I1120 21:37:17.515962  880634 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/58308eaea781bea871a5e23f5a856165d8e798310a7495efe422468c5df5af1f/crio/crio-285c7efcbffd99cbbbe5ce0c980c6f8b8ffa6dd459bed6b779edfeee0b0a3a5c/freezer.state
	I1120 21:37:17.525421  880634 api_server.go:204] freezer state: "THAWED"
	I1120 21:37:17.525447  880634 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 21:37:17.533644  880634 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 21:37:17.533672  880634 status.go:463] ha-409851-m03 apiserver status = Running (err=<nil>)
	I1120 21:37:17.533682  880634 status.go:176] ha-409851-m03 status: &{Name:ha-409851-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:37:17.533699  880634 status.go:174] checking status of ha-409851-m04 ...
	I1120 21:37:17.534005  880634 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:37:17.553053  880634 status.go:371] ha-409851-m04 host status = "Running" (err=<nil>)
	I1120 21:37:17.553081  880634 host.go:66] Checking if "ha-409851-m04" exists ...
	I1120 21:37:17.553396  880634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409851-m04
	I1120 21:37:17.570500  880634 host.go:66] Checking if "ha-409851-m04" exists ...
	I1120 21:37:17.570950  880634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:37:17.571050  880634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409851-m04
	I1120 21:37:17.588478  880634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/ha-409851-m04/id_rsa Username:docker}
	I1120 21:37:17.688023  880634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:37:17.702317  880634 status.go:176] ha-409851-m04 status: &{Name:ha-409851-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
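Note: in the status transcript above, minikube decides whether an apiserver is healthy by finding its pid, reading the freezer cgroup state, and finally probing https://192.168.49.254:8443/healthz, while the command as a whole exits with status 7 because one node is stopped. A minimal Go sketch of that last healthz probe, with the endpoint taken from the log and TLS verification skipped purely for brevity; this is an illustration, not minikube's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the status log above; real callers should verify TLS.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}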

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.19s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node start m02 --alsologtostderr -v 5
E1120 21:37:37.759424  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 node start m02 --alsologtostderr -v 5: (31.877260238s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5: (1.197541163s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.259849949s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.61s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 node delete m03 --alsologtostderr -v 5: (10.58244038s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.61s)
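Note: the go-template passed to kubectl above prints one Ready condition status per remaining node, so the test only has to check that every value is "True". A minimal Go sketch of the same check, assuming kubectl on PATH and the ha-409851 context from this run (the test additionally wraps the template in single quotes, which simply pass through into the output).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the test: emit the Ready condition status for every node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--context", "ha-409851", "get", "nodes",
		"-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	allReady := true
	for _, status := range strings.Fields(string(out)) {
		allReady = allReady && status == "True"
	}
	fmt.Println("all nodes Ready:", allReady)
}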

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.36s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 stop --alsologtostderr -v 5: (36.250165316s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5: exit status 7 (114.44351ms)

                                                
                                                
-- stdout --
	ha-409851
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409851-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409851-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:46:12.675719  893787 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:46:12.675944  893787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.675975  893787 out.go:374] Setting ErrFile to fd 2...
	I1120 21:46:12.675998  893787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:46:12.676288  893787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 21:46:12.676502  893787 out.go:368] Setting JSON to false
	I1120 21:46:12.676563  893787 mustload.go:66] Loading cluster: ha-409851
	I1120 21:46:12.676638  893787 notify.go:221] Checking for updates...
	I1120 21:46:12.676984  893787 config.go:182] Loaded profile config "ha-409851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:46:12.677029  893787 status.go:174] checking status of ha-409851 ...
	I1120 21:46:12.677933  893787 cli_runner.go:164] Run: docker container inspect ha-409851 --format={{.State.Status}}
	I1120 21:46:12.697524  893787 status.go:371] ha-409851 host status = "Stopped" (err=<nil>)
	I1120 21:46:12.697544  893787 status.go:384] host is not running, skipping remaining checks
	I1120 21:46:12.697550  893787 status.go:176] ha-409851 status: &{Name:ha-409851 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:46:12.697582  893787 status.go:174] checking status of ha-409851-m02 ...
	I1120 21:46:12.697878  893787 cli_runner.go:164] Run: docker container inspect ha-409851-m02 --format={{.State.Status}}
	I1120 21:46:12.723094  893787 status.go:371] ha-409851-m02 host status = "Stopped" (err=<nil>)
	I1120 21:46:12.723122  893787 status.go:384] host is not running, skipping remaining checks
	I1120 21:46:12.723130  893787 status.go:176] ha-409851-m02 status: &{Name:ha-409851-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:46:12.723153  893787 status.go:174] checking status of ha-409851-m04 ...
	I1120 21:46:12.723441  893787 cli_runner.go:164] Run: docker container inspect ha-409851-m04 --format={{.State.Status}}
	I1120 21:46:12.742415  893787 status.go:371] ha-409851-m04 host status = "Stopped" (err=<nil>)
	I1120 21:46:12.742438  893787 status.go:384] host is not running, skipping remaining checks
	I1120 21:46:12.742445  893787 status.go:176] ha-409851-m04 status: &{Name:ha-409851-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.36s)

TestMultiControlPlane/serial/AddSecondaryNode (80.2s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 node add --control-plane --alsologtostderr -v 5
E1120 21:52:38.885167  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:53:38.577758  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 node add --control-plane --alsologtostderr -v 5: (1m19.013068548s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-409851 status --alsologtostderr -v 5: (1.182018714s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.20s)

TestJSONOutput/start/Command (82.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-515527 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-515527 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.25881313s)
--- PASS: TestJSONOutput/start/Command (82.26s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-515527 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-515527 --output=json --user=testUser: (5.904699524s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-262139 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-262139 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.254309ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9011f7ce-c860-47ea-b082-622b28d2ab90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-262139] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3d4c08c-83db-448a-b696-670ac8eeba7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"ff900303-620e-4aa3-a8fe-c6be03b58732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f67eee1-f61e-431b-b884-75fc094148d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig"}}
	{"specversion":"1.0","id":"4ff26393-b840-4316-beac-47aba8b11328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube"}}
	{"specversion":"1.0","id":"c7f47336-69c9-4d14-964e-377190b6de38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"210810f1-e620-4f0f-9e2c-3e7719e5ec11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ad2a18a1-7599-48d8-bd15-0578313d9e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-262139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-262139
--- PASS: TestErrorJSONOutput (0.24s)
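Note: each line in the stdout above is a CloudEvents-style JSON object with a type (io.k8s.sigs.minikube.step, .info, .error) and a string-valued data map; the final error event carries exit code 56 and the DRV_UNSUPPORTED_OS name. A minimal Go sketch for consuming such a stream, with field names mirroring the JSON shown above; pipe the --output=json command into it.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the line-delimited CloudEvents emitted with --output=json;
// in the output shown above the data map holds only string values.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | thisprogram
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		} else {
			fmt.Println(e.Type + ": " + e.Data["message"])
		}
	}
}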

                                                
                                    
TestKicCustomNetwork/create_custom_network (74.57s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-285999 --network=
E1120 21:56:15.819995  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-285999 --network=: (1m12.37651606s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-285999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-285999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-285999: (2.169672532s)
--- PASS: TestKicCustomNetwork/create_custom_network (74.57s)

TestKicCustomNetwork/use_default_bridge_network (31.91s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-441695 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-441695 --network=bridge: (29.818662844s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-441695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-441695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-441695: (2.065576832s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.91s)

TestKicExistingNetwork (32.26s)
=== RUN   TestKicExistingNetwork
I1120 21:57:24.767226  836852 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1120 21:57:24.783029  836852 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1120 21:57:24.783997  836852 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1120 21:57:24.784037  836852 cli_runner.go:164] Run: docker network inspect existing-network
W1120 21:57:24.800521  836852 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1120 21:57:24.800551  836852 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1120 21:57:24.800570  836852 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1120 21:57:24.800669  836852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1120 21:57:24.818755  836852 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ad232b357b1b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:e5:2b:94:2e:bb} reservation:<nil>}
I1120 21:57:24.819157  836852 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a8d40}
I1120 21:57:24.819190  836852 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1120 21:57:24.819244  836852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1120 21:57:24.890777  836852 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-811921 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-811921 --network=existing-network: (29.936586527s)
helpers_test.go:175: Cleaning up "existing-network-811921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-811921
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-811921: (2.163336599s)
I1120 21:57:57.007481  836852 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.26s)
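Note: the interesting step above is that the test pre-creates a labelled bridge network on a free subnet and then runs minikube start --network=existing-network, which reuses it instead of creating a new one. A minimal Go sketch of the pre-creation step, mirroring the docker network create flags and labels from the log; the subnet is whichever free /24 you choose.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Flags and labels copied from the docker network create call in the log above.
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("docker %v: %v\n%s", args, err, out)
	}
	log.Println("network existing-network created; minikube start --network=existing-network will reuse it")
}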

                                                
                                    
TestKicCustomSubnet (37.15s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-912376 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-912376 --subnet=192.168.60.0/24: (34.823985649s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-912376 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-912376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-912376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-912376: (2.29576772s)
--- PASS: TestKicCustomSubnet (37.15s)

TestKicStaticIP (39.08s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-339162 --static-ip=192.168.200.200
E1120 21:58:38.578017  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-339162 --static-ip=192.168.200.200: (36.643387937s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-339162 ip
helpers_test.go:175: Cleaning up "static-ip-339162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-339162
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-339162: (2.271181367s)
--- PASS: TestKicStaticIP (39.08s)
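Note: the static-IP flow above is simply: start the profile with --static-ip and confirm that minikube ip reports the same address. A minimal Go sketch, assuming a minikube binary on PATH and reusing the profile name and address from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile, want = "static-ip-339162", "192.168.200.200" // names from the run above
	// Start the profile pinned to the requested address.
	if out, err := exec.Command("minikube", "start", "-p", profile, "--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}
	// Ask minikube which address the node actually got.
	out, err := exec.Command("minikube", "-p", profile, "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
}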

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (81.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-781812 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-781812 --driver=docker  --container-runtime=crio: (38.47360436s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-784462 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-784462 --driver=docker  --container-runtime=crio: (37.165615794s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-781812
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-784462
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-784462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-784462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-784462: (2.115929695s)
helpers_test.go:175: Cleaning up "first-781812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-781812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-781812: (2.06643223s)
--- PASS: TestMinikubeProfile (81.33s)
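Note: the test above switches the active profile and then inspects minikube profile list -ojson. A minimal Go sketch of reading that JSON; the valid/Name/Status field names are assumptions about the profile-list schema rather than something shown in this log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Field names below are assumed; adjust them if the actual schema differs.
	var resp struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatal(err)
	}
	for _, p := range resp.Valid {
		fmt.Printf("%s\t%s\n", p.Name, p.Status)
	}
}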

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.42s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-981188 --memory=3072 --mount-string /tmp/TestMountStartserial521657530/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-981188 --memory=3072 --mount-string /tmp/TestMountStartserial521657530/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.417665549s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.42s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-981188 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.37s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-983190 --memory=3072 --mount-string /tmp/TestMountStartserial521657530/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-983190 --memory=3072 --mount-string /tmp/TestMountStartserial521657530/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.364843451s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.37s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-983190 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.74s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-981188 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-981188 --alsologtostderr -v=5: (1.742692532s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-983190 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-983190
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-983190: (1.298273501s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.36s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-983190
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-983190: (7.356034857s)
--- PASS: TestMountStart/serial/RestartStopped (8.36s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-983190 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (142.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-640042 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1120 22:01:15.819954  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-640042 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m22.061347635s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (142.61s)
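A rough manual equivalent of this check, using a placeholder profile name and the same flags the test passes:

    # create a two-node cluster and wait for all components to come up
    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=crio
    # the test then asserts both nodes report Running
    minikube -p multinode-demo status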

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-640042 -- rollout status deployment/busybox: (5.073366434s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-7zcsd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-npgxm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-7zcsd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-npgxm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-7zcsd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-npgxm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.82s)
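The same deployment-plus-DNS check, sketched by hand; the manifest path is the repo's testdata file used above, the profile name is a placeholder, and <busybox-pod> stands for one of the pod names returned by the jsonpath query:

    # deploy the busybox test workload and wait for the rollout
    minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    # list the pods, then resolve an external and an in-cluster name from one of them
    minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.io
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local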

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-7zcsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-7zcsd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-npgxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-640042 -- exec busybox-7b57f96db7-npgxm -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
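A minimal sketch of the host-reachability check above (placeholder profile and pod name; 192.168.67.1 is simply the host address this particular run resolved and pinged):

    # look up the host's address as seen from inside a pod ...
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ... and ping it once
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"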

                                                
                                    
TestMultiNode/serial/AddNode (57.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-640042 -v=5 --alsologtostderr
E1120 22:03:38.578438  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-640042 -v=5 --alsologtostderr: (57.252953991s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.96s)
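By hand this is a single node-add followed by a status check (placeholder profile name):

    # add a worker node to an existing multi-node profile
    minikube node add -p multinode-demo -v=5 --alsologtostderr
    minikube -p multinode-demo status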

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-640042 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp testdata/cp-test.txt multinode-640042:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1340958169/001/cp-test_multinode-640042.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042:/home/docker/cp-test.txt multinode-640042-m02:/home/docker/cp-test_multinode-640042_multinode-640042-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test_multinode-640042_multinode-640042-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042:/home/docker/cp-test.txt multinode-640042-m03:/home/docker/cp-test_multinode-640042_multinode-640042-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test_multinode-640042_multinode-640042-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp testdata/cp-test.txt multinode-640042-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1340958169/001/cp-test_multinode-640042-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m02:/home/docker/cp-test.txt multinode-640042:/home/docker/cp-test_multinode-640042-m02_multinode-640042.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test_multinode-640042-m02_multinode-640042.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m02:/home/docker/cp-test.txt multinode-640042-m03:/home/docker/cp-test_multinode-640042-m02_multinode-640042-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test_multinode-640042-m02_multinode-640042-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp testdata/cp-test.txt multinode-640042-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1340958169/001/cp-test_multinode-640042-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m03:/home/docker/cp-test.txt multinode-640042:/home/docker/cp-test_multinode-640042-m03_multinode-640042.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042 "sudo cat /home/docker/cp-test_multinode-640042-m03_multinode-640042.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 cp multinode-640042-m03:/home/docker/cp-test.txt multinode-640042-m02:/home/docker/cp-test_multinode-640042-m03_multinode-640042-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 ssh -n multinode-640042-m02 "sudo cat /home/docker/cp-test_multinode-640042-m03_multinode-640042-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.46s)
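The CopyFile sequence above round-trips a file host-to-node, node-to-host and node-to-node, verifying the contents over ssh after each copy. A condensed sketch with placeholder names (-m02 is the suffix minikube gives the second node):

    # host -> node
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    # node -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    # each copy is checked by cat-ing the file on the destination
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"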

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-640042 node stop m03: (1.327469385s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-640042 status: exit status 7 (538.37263ms)

                                                
                                                
-- stdout --
	multinode-640042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-640042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-640042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr: exit status 7 (529.155874ms)

                                                
                                                
-- stdout --
	multinode-640042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-640042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-640042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:04:46.328477  945796 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:04:46.328605  945796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:04:46.328616  945796 out.go:374] Setting ErrFile to fd 2...
	I1120 22:04:46.328621  945796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:04:46.328875  945796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:04:46.329070  945796 out.go:368] Setting JSON to false
	I1120 22:04:46.329101  945796 mustload.go:66] Loading cluster: multinode-640042
	I1120 22:04:46.329144  945796 notify.go:221] Checking for updates...
	I1120 22:04:46.329489  945796 config.go:182] Loaded profile config "multinode-640042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:04:46.329506  945796 status.go:174] checking status of multinode-640042 ...
	I1120 22:04:46.330046  945796 cli_runner.go:164] Run: docker container inspect multinode-640042 --format={{.State.Status}}
	I1120 22:04:46.351483  945796 status.go:371] multinode-640042 host status = "Running" (err=<nil>)
	I1120 22:04:46.351505  945796 host.go:66] Checking if "multinode-640042" exists ...
	I1120 22:04:46.351979  945796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-640042
	I1120 22:04:46.381082  945796 host.go:66] Checking if "multinode-640042" exists ...
	I1120 22:04:46.381384  945796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:04:46.381430  945796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-640042
	I1120 22:04:46.398889  945796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34012 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/multinode-640042/id_rsa Username:docker}
	I1120 22:04:46.496260  945796 ssh_runner.go:195] Run: systemctl --version
	I1120 22:04:46.502808  945796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:04:46.516070  945796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:04:46.584619  945796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 22:04:46.575401052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:04:46.585275  945796 kubeconfig.go:125] found "multinode-640042" server: "https://192.168.67.2:8443"
	I1120 22:04:46.585316  945796 api_server.go:166] Checking apiserver status ...
	I1120 22:04:46.585363  945796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 22:04:46.596605  945796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	I1120 22:04:46.604889  945796 api_server.go:182] apiserver freezer: "7:freezer:/docker/9efcfab04b9dcbc6289f25a5e1eecdd840433a108e4688a3f84731ecca0580ee/crio/crio-864d1e2077d1fc02b475e47466135f5ebe71501e650ec398b321424238185d55"
	I1120 22:04:46.604955  945796 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9efcfab04b9dcbc6289f25a5e1eecdd840433a108e4688a3f84731ecca0580ee/crio/crio-864d1e2077d1fc02b475e47466135f5ebe71501e650ec398b321424238185d55/freezer.state
	I1120 22:04:46.612514  945796 api_server.go:204] freezer state: "THAWED"
	I1120 22:04:46.612545  945796 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1120 22:04:46.622025  945796 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1120 22:04:46.622058  945796 status.go:463] multinode-640042 apiserver status = Running (err=<nil>)
	I1120 22:04:46.622070  945796 status.go:176] multinode-640042 status: &{Name:multinode-640042 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 22:04:46.622087  945796 status.go:174] checking status of multinode-640042-m02 ...
	I1120 22:04:46.622444  945796 cli_runner.go:164] Run: docker container inspect multinode-640042-m02 --format={{.State.Status}}
	I1120 22:04:46.639085  945796 status.go:371] multinode-640042-m02 host status = "Running" (err=<nil>)
	I1120 22:04:46.639118  945796 host.go:66] Checking if "multinode-640042-m02" exists ...
	I1120 22:04:46.639415  945796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-640042-m02
	I1120 22:04:46.655982  945796 host.go:66] Checking if "multinode-640042-m02" exists ...
	I1120 22:04:46.656313  945796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 22:04:46.656363  945796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-640042-m02
	I1120 22:04:46.673142  945796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34017 SSHKeyPath:/home/jenkins/minikube-integration/21923-834992/.minikube/machines/multinode-640042-m02/id_rsa Username:docker}
	I1120 22:04:46.772403  945796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 22:04:46.785051  945796 status.go:176] multinode-640042-m02 status: &{Name:multinode-640042-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1120 22:04:46.785085  945796 status.go:174] checking status of multinode-640042-m03 ...
	I1120 22:04:46.785398  945796 cli_runner.go:164] Run: docker container inspect multinode-640042-m03 --format={{.State.Status}}
	I1120 22:04:46.808031  945796 status.go:371] multinode-640042-m03 host status = "Stopped" (err=<nil>)
	I1120 22:04:46.808055  945796 status.go:384] host is not running, skipping remaining checks
	I1120 22:04:46.808062  945796 status.go:176] multinode-640042-m03 status: &{Name:multinode-640042-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
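Stopping one node and reading the resulting status is all this test does; the non-zero exits above are expected, since status returns exit code 7 whenever a host is Stopped (placeholder profile name):

    # stop only the third node of the profile
    minikube -p multinode-demo node stop m03
    # exits 7 because m03 now reports host/kubelet Stopped, as in the stdout block above
    minikube -p multinode-demo status --alsologtostderr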

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-640042 node start m03 -v=5 --alsologtostderr: (7.405335307s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-640042
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-640042
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-640042: (25.102093621s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-640042 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-640042 --wait=true -v=5 --alsologtostderr: (48.211160706s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-640042
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.44s)
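A rough manual equivalent of this restart check (placeholder profile name):

    # record the node list, stop the whole cluster, then restart it with --wait=true
    minikube node list -p multinode-demo
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true -v=5 --alsologtostderr
    # the node list afterwards should match the one recorded before the stop
    minikube node list -p multinode-demo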

                                                
                                    
TestMultiNode/serial/DeleteNode (5.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-640042 node delete m03: (5.00486682s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)
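The delete-node case boils down to (placeholder profile name):

    # remove the third node, then confirm the remaining nodes still report Ready
    minikube -p multinode-demo node delete m03
    minikube -p multinode-demo status --alsologtostderr
    kubectl get nodes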

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 stop
E1120 22:06:15.819811  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-640042 stop: (23.810270426s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-640042 status: exit status 7 (99.983838ms)

                                                
                                                
-- stdout --
	multinode-640042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-640042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr: exit status 7 (97.698491ms)

                                                
                                                
-- stdout --
	multinode-640042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-640042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:06:38.077265  953635 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:06:38.077387  953635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:06:38.077406  953635 out.go:374] Setting ErrFile to fd 2...
	I1120 22:06:38.077415  953635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:06:38.078263  953635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:06:38.078638  953635 out.go:368] Setting JSON to false
	I1120 22:06:38.078708  953635 mustload.go:66] Loading cluster: multinode-640042
	I1120 22:06:38.078828  953635 notify.go:221] Checking for updates...
	I1120 22:06:38.079228  953635 config.go:182] Loaded profile config "multinode-640042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:06:38.079248  953635 status.go:174] checking status of multinode-640042 ...
	I1120 22:06:38.080131  953635 cli_runner.go:164] Run: docker container inspect multinode-640042 --format={{.State.Status}}
	I1120 22:06:38.099358  953635 status.go:371] multinode-640042 host status = "Stopped" (err=<nil>)
	I1120 22:06:38.099378  953635 status.go:384] host is not running, skipping remaining checks
	I1120 22:06:38.099385  953635 status.go:176] multinode-640042 status: &{Name:multinode-640042 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 22:06:38.099418  953635 status.go:174] checking status of multinode-640042-m02 ...
	I1120 22:06:38.099722  953635 cli_runner.go:164] Run: docker container inspect multinode-640042-m02 --format={{.State.Status}}
	I1120 22:06:38.119697  953635 status.go:371] multinode-640042-m02 host status = "Stopped" (err=<nil>)
	I1120 22:06:38.119722  953635 status.go:384] host is not running, skipping remaining checks
	I1120 22:06:38.119728  953635 status.go:176] multinode-640042-m02 status: &{Name:multinode-640042-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-640042 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-640042 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.59685454s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-640042 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.30s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-640042
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-640042-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-640042-m02 --driver=docker  --container-runtime=crio: exit status 14 (105.7729ms)

                                                
                                                
-- stdout --
	* [multinode-640042-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-640042-m02' is duplicated with machine name 'multinode-640042-m02' in profile 'multinode-640042'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-640042-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-640042-m03 --driver=docker  --container-runtime=crio: (32.679742971s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-640042
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-640042: exit status 80 (337.240136ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-640042 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-640042-m03 already exists in multinode-640042-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-640042-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-640042-m03: (2.168241224s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.34s)
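Both non-zero exits above are the expected outcome of this test: it deliberately provokes profile-name collisions. A sketch of the same conflicts with placeholder names:

    # a new profile named like an existing profile's machine is rejected (exit 14, MK_USAGE)
    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio
    # with a standalone profile named multinode-demo-m03 present, adding a node to
    # multinode-demo fails because the generated node name collides with it (exit 80, GUEST_NODE_ADD)
    minikube start -p multinode-demo-m03 --driver=docker --container-runtime=crio
    minikube node add -p multinode-demo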

                                                
                                    
TestPreload (158.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1120 22:08:21.651216  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:08:38.577782  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.693866218s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498278 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-498278 image pull gcr.io/k8s-minikube/busybox: (2.34117285s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-498278
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-498278: (5.930966056s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498278 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1120 22:09:18.887111  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498278 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m25.085650289s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498278 image list
helpers_test.go:175: Cleaning up "test-preload-498278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-498278
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-498278: (2.49198265s)
--- PASS: TestPreload (158.82s)
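A condensed manual version of the preload check above (placeholder profile name, same flags as the test):

    # start without preloaded images on an older Kubernetes, then pull an extra image
    minikube start -p preload-demo --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    # stop, start the same profile again, and confirm the pulled image is still listed
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
    minikube -p preload-demo image list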

                                                
                                    
TestScheduledStopUnix (111.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-472571 --memory=3072 --driver=docker  --container-runtime=crio
E1120 22:11:15.825293  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-472571 --memory=3072 --driver=docker  --container-runtime=crio: (34.765024575s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472571 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 22:11:19.725758  967754 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:11:19.725986  967754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:19.726014  967754 out.go:374] Setting ErrFile to fd 2...
	I1120 22:11:19.726033  967754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:19.726308  967754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:11:19.726614  967754 out.go:368] Setting JSON to false
	I1120 22:11:19.726787  967754 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:19.727199  967754 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:11:19.727297  967754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/config.json ...
	I1120 22:11:19.727532  967754 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:19.727694  967754 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-472571 -n scheduled-stop-472571
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472571 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 22:11:20.209198  967844 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:11:20.209408  967844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:20.209435  967844 out.go:374] Setting ErrFile to fd 2...
	I1120 22:11:20.209453  967844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:20.209788  967844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:11:20.210147  967844 out.go:368] Setting JSON to false
	I1120 22:11:20.211125  967844 daemonize_unix.go:73] killing process 967771 as it is an old scheduled stop
	I1120 22:11:20.214972  967844 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:20.215541  967844 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:11:20.215678  967844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/config.json ...
	I1120 22:11:20.215910  967844 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:20.216076  967844 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1120 22:11:20.221713  836852 retry.go:31] will retry after 107.255µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.221898  836852 retry.go:31] will retry after 166.44µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.222465  836852 retry.go:31] will retry after 334.389µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.223039  836852 retry.go:31] will retry after 310.66µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.224152  836852 retry.go:31] will retry after 373.618µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.225208  836852 retry.go:31] will retry after 541.762µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.226274  836852 retry.go:31] will retry after 653.156µs: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.227389  836852 retry.go:31] will retry after 2.359851ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.230603  836852 retry.go:31] will retry after 2.254656ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.233818  836852 retry.go:31] will retry after 4.831358ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.239133  836852 retry.go:31] will retry after 4.373611ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.244364  836852 retry.go:31] will retry after 9.170818ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.254629  836852 retry.go:31] will retry after 16.490182ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.271857  836852 retry.go:31] will retry after 17.055089ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.289024  836852 retry.go:31] will retry after 30.286294ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
I1120 22:11:20.320247  836852 retry.go:31] will retry after 47.829525ms: open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472571 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472571 -n scheduled-stop-472571
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472571
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472571 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 22:11:46.165136  968206 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:11:46.165315  968206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:46.165344  968206 out.go:374] Setting ErrFile to fd 2...
	I1120 22:11:46.165365  968206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:11:46.165670  968206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:11:46.165981  968206 out.go:368] Setting JSON to false
	I1120 22:11:46.166122  968206 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:46.166544  968206 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:11:46.166658  968206 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/scheduled-stop-472571/config.json ...
	I1120 22:11:46.166896  968206 mustload.go:66] Loading cluster: scheduled-stop-472571
	I1120 22:11:46.167087  968206 config.go:182] Loaded profile config "scheduled-stop-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472571
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-472571: exit status 7 (73.672632ms)

                                                
                                                
-- stdout --
	scheduled-stop-472571
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472571 -n scheduled-stop-472571
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472571 -n scheduled-stop-472571: exit status 7 (73.38293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-472571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-472571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-472571: (5.153162216s)
--- PASS: TestScheduledStopUnix (111.56s)
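The scheduled-stop flow above, sketched by hand (placeholder profile name; the commands and flags are the ones logged in this run):

    # schedule a stop 5 minutes out, then replace it with a 15-second schedule
    minikube stop -p sched-demo --schedule 5m
    minikube stop -p sched-demo --schedule 15s
    # cancel whatever is pending; the profile should still be Running afterwards
    minikube stop -p sched-demo --cancel-scheduled
    # once a scheduled stop actually fires, status reports Stopped and exits 7
    minikube status --format={{.Host}} -p sched-demo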

                                                
                                    
TestInsufficientStorage (13.33s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-540807 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-540807 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.719128618s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"55b04b4f-dfeb-4f00-a19e-e09a2e9ead64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-540807] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00fdfada-857a-4b15-ae3f-87bc835fa199","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"00d43e19-8fab-42e2-84de-83b77b8b5272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bc2e8e5-16e0-4775-b315-03a8bb6dc717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig"}}
	{"specversion":"1.0","id":"c7256b3b-89e4-4af1-b551-30aff619bf05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube"}}
	{"specversion":"1.0","id":"dfeb2a73-17d3-4540-a0ea-8bf4b68ed4f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a5734444-b2c0-400e-88ff-e82223a22e8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14c55d63-e859-4f15-a7a8-5f268148cd9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0c71cf9f-c2bc-4ac9-a5cc-f328cc3845aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"afb9a8cc-23a1-45a8-8d41-84e139fb5496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0c575b3-5438-4202-87fc-ede2c0d247ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8d2dc8fb-feed-4ce7-8ae6-0c9abb19525e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-540807\" primary control-plane node in \"insufficient-storage-540807\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"71a188a3-8656-4fc8-859c-1bce0beb86e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd5b6a28-355d-4c8d-b063-87cf461f7f69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3183dde4-bf68-474c-a938-f3abf3d7723f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-540807 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-540807 --output=json --layout=cluster: exit status 7 (328.672981ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 22:12:47.508628  969923 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-540807" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-540807 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-540807 --output=json --layout=cluster: exit status 7 (302.679133ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 22:12:47.812437  969990 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-540807" does not appear in /home/jenkins/minikube-integration/21923-834992/kubeconfig
	E1120 22:12:47.823353  969990 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/insufficient-storage-540807/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-540807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-540807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-540807: (1.97961858s)
--- PASS: TestInsufficientStorage (13.33s)
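The cluster-layout status JSON above (StatusCode 507, InsufficientStorage) is the contract the test asserts against. As a minimal sketch in Go, assuming hypothetical struct and helper names (these are not minikube's own types), the payload captured above can be decoded like this:

// status_decode_sketch.go: an illustrative decoder for the cluster-layout status
// JSON shown above. Field names mirror the captured output; the types and names
// here are hypothetical, not minikube's internal structures.
package main

import (
	"encoding/json"
	"fmt"
)

type componentStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name         string                     `json:"Name"`
	StatusCode   int                        `json:"StatusCode"`
	StatusName   string                     `json:"StatusName"`
	StatusDetail string                     `json:"StatusDetail"`
	Components   map[string]componentStatus `json:"Components"`
	Nodes        []struct {
		Name       string                     `json:"Name"`
		StatusCode int                        `json:"StatusCode"`
		StatusName string                     `json:"StatusName"`
		Components map[string]componentStatus `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the status payload captured in the test run above.
	raw := `{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-540807","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// 507 is the insufficient-storage code the test expects once the disk check trips.
	fmt.Printf("%s: %d (%s) - %s\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
}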

                                                
                                    
x
+
TestRunningBinaryUpgrade (62.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2967977084 start -p running-upgrade-803505 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1120 22:16:15.820239  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2967977084 start -p running-upgrade-803505 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.239369419s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-803505 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-803505 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.494777008s)
helpers_test.go:175: Cleaning up "running-upgrade-803505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-803505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-803505: (2.224004339s)
--- PASS: TestRunningBinaryUpgrade (62.95s)

                                                
                                    
x
+
TestKubernetesUpgrade (371.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.467989572s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-410652
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-410652: (1.370618993s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-410652 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-410652 status --format={{.Host}}: exit status 7 (76.21374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.462997721s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-410652 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (150.337884ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-410652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-410652
	    minikube start -p kubernetes-upgrade-410652 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4106522 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-410652 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-410652 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.58468239s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-410652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-410652
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-410652: (2.155077217s)
--- PASS: TestKubernetesUpgrade (371.41s)
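The downgrade attempt above exits with K8S_DOWNGRADE_UNSUPPORTED because the requested v1.28.0 is older than the running v1.34.1. Below is a minimal sketch in Go of that kind of version guard, with a hand-rolled parser and error text modelled on the message above; it is illustrative only, not minikube's implementation.

// downgrade_guard_sketch.go: an illustrative guard that refuses the
// v1.34.1 -> v1.28.0 downgrade seen above. The parsing and error wording
// are assumptions made for this sketch.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a version such as "v1.34.1" into its numeric components.
func parse(v string) ([3]int, error) {
	var out [3]int
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) != 3 {
		return out, fmt.Errorf("unexpected version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}

// checkDowngrade returns an error when the requested version is older than
// the version the existing cluster is already running.
func checkDowngrade(current, requested string) error {
	c, err := parse(current)
	if err != nil {
		return err
	}
	r, err := parse(requested)
	if err != nil {
		return err
	}
	for i := range c {
		if r[i] < c[i] {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
		}
		if r[i] > c[i] {
			return nil
		}
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}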

                                                
                                    
x
+
TestMissingContainerUpgrade (119.17s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.303365943 start -p missing-upgrade-407986 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.303365943 start -p missing-upgrade-407986 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.94786788s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-407986
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-407986
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-407986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-407986 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.969488501s)
helpers_test.go:175: Cleaning up "missing-upgrade-407986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-407986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-407986: (2.120414579s)
--- PASS: TestMissingContainerUpgrade (119.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.008583ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-787224] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
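The exit status 14 above comes from rejecting --kubernetes-version together with --no-kubernetes. A minimal sketch in Go of that kind of mutually exclusive flag check, using the standard flag package as an assumption (minikube's real CLI wiring differs):

// flag_conflict_sketch.go: a hypothetical illustration of refusing
// --kubernetes-version combined with --no-kubernetes, as in the MK_USAGE
// failure above. This is only a sketch of the idea.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Treat a non-empty version as "explicitly requested"; an approximation for the sketch.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors surface as exit status 14 in the run above
	}
	fmt.Println("flags ok")
}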

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-787224 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-787224 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.341281166s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-787224 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.943247323s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-787224 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-787224 status -o json: exit status 2 (375.848653ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-787224","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-787224
E1120 22:13:38.578313  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-787224: (2.1594708s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (11.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-787224 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.348922572s)
--- PASS: TestNoKubernetes/serial/Start (11.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21923-834992/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
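The check above inspects the v0.0.0 cache directory to confirm that no Kubernetes binaries were downloaded for a no-kubernetes start. A minimal sketch in Go of such a check, with the path shape taken from the log line above and the expectations being assumptions of the sketch:

// cache_check_sketch.go: an illustrative check that the binary cache for the
// "no kubernetes" pseudo-version v0.0.0 stayed empty. The helper and the exact
// expectation are assumptions for this sketch.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path shape taken from the log line above; MINIKUBE_HOME differs per machine.
	cacheDir := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "linux", "arm64", "v0.0.0")

	entries, err := os.ReadDir(cacheDir)
	if os.IsNotExist(err) {
		fmt.Println("cache directory absent: nothing was downloaded")
		return
	} else if err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		fmt.Println("cache directory empty: nothing was downloaded")
		return
	}
	for _, e := range entries {
		fmt.Println("unexpected cached file:", e.Name())
	}
}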

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-787224 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-787224 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.118464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
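The pass condition above is that "systemctl is-active --quiet kubelet" exits non-zero inside the node (status 3 means the unit is inactive). A minimal sketch in Go of the same idea, run locally rather than over minikube ssh, which is an assumption of the sketch:

// kubelet_inactive_sketch.go: an illustrative local equivalent of the check above,
// treating a non-zero exit from "systemctl is-active --quiet kubelet" (status 3 =
// inactive) as "Kubernetes is not running".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// systemd reports inactive or failed units with a non-zero status (3 seen above).
		fmt.Printf("kubelet not running (exit status %d)\n", exitErr.ExitCode())
		return
	}
	panic(err)
}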

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-787224
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-787224: (1.376242263s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-787224 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-787224 --driver=docker  --container-runtime=crio: (7.924575616s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-787224 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-787224 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.003516ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (7.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (7.98s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (57.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.451070682 start -p stopped-upgrade-239493 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.451070682 start -p stopped-upgrade-239493 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.336328606s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.451070682 -p stopped-upgrade-239493 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.451070682 -p stopped-upgrade-239493 stop: (1.238387311s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-239493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-239493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.29366076s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.87s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-239493
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-239493: (1.276474133s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                    
x
+
TestPause/serial/Start (81.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-236741 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-236741 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.78234155s)
--- PASS: TestPause/serial/Start (81.78s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (28.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-236741 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1120 22:18:38.578577  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-236741 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.770073595s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-640880 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-640880 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.206056ms)

                                                
                                                
-- stdout --
	* [false-640880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 22:19:47.845299 1008726 out.go:360] Setting OutFile to fd 1 ...
	I1120 22:19:47.845501 1008726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:19:47.845515 1008726 out.go:374] Setting ErrFile to fd 2...
	I1120 22:19:47.845521 1008726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 22:19:47.845833 1008726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-834992/.minikube/bin
	I1120 22:19:47.846309 1008726 out.go:368] Setting JSON to false
	I1120 22:19:47.847294 1008726 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18113,"bootTime":1763659075,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1120 22:19:47.847368 1008726 start.go:143] virtualization:  
	I1120 22:19:47.851046 1008726 out.go:179] * [false-640880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 22:19:47.855007 1008726 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 22:19:47.855147 1008726 notify.go:221] Checking for updates...
	I1120 22:19:47.860907 1008726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 22:19:47.863947 1008726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-834992/kubeconfig
	I1120 22:19:47.866894 1008726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-834992/.minikube
	I1120 22:19:47.869871 1008726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 22:19:47.872871 1008726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 22:19:47.876373 1008726 config.go:182] Loaded profile config "kubernetes-upgrade-410652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 22:19:47.876492 1008726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 22:19:47.909430 1008726 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 22:19:47.909548 1008726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 22:19:47.972935 1008726 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-20 22:19:47.963568209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 22:19:47.973043 1008726 docker.go:319] overlay module found
	I1120 22:19:47.976166 1008726 out.go:179] * Using the docker driver based on user configuration
	I1120 22:19:47.979091 1008726 start.go:309] selected driver: docker
	I1120 22:19:47.979136 1008726 start.go:930] validating driver "docker" against <nil>
	I1120 22:19:47.979151 1008726 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 22:19:47.982798 1008726 out.go:203] 
	W1120 22:19:47.985851 1008726 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1120 22:19:47.988672 1008726 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-640880 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-640880" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-410652
contexts:
- context:
    cluster: kubernetes-upgrade-410652
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-410652
  name: kubernetes-upgrade-410652
current-context: kubernetes-upgrade-410652
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-410652
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.crt
    client-key: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-640880

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640880"

                                                
                                                
----------------------- debugLogs end: false-640880 [took: 3.401879771s] --------------------------------
helpers_test.go:175: Cleaning up "false-640880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-640880
--- PASS: TestNetworkPlugins/group/false (3.75s)
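The exit status 14 above is produced by the validation that the crio runtime cannot be started with --cni=false. A minimal sketch in Go of that kind of guard, with the function name and exit-code wiring being assumptions modelled on the output above rather than minikube's code:

// cni_requirement_sketch.go: an illustrative version of the validation that fails
// above ("The \"crio\" container runtime requires CNI", exit status 14).
package main

import (
	"fmt"
	"os"
)

// validateCNI rejects configurations that disable CNI on a runtime that needs it.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}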

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (63.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.512138824s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-443192 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [930e84cf-8f5d-4107-bdf0-ee99b259637f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [930e84cf-8f5d-4107-bdf0-ee99b259637f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004113686s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-443192 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-443192 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-443192 --alsologtostderr -v=3: (11.998754012s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192: exit status 7 (74.950323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-443192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-443192 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.017378368s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-443192 -n old-k8s-version-443192
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvh8p" [c6b6317d-6005-4477-ae37-06c8f92438a3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003869733s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvh8p" [c6b6317d-6005-4477-ae37-06c8f92438a3] Running
E1120 22:23:38.577897  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00328445s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-443192 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-443192 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.688217815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 22:25:01.653212  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.777169014s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4f7c04d2-4cac-444d-82be-6529560dd56c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4f7c04d2-4cac-444d-82be-6529560dd56c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004469249s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)
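The DeployApp step above can be replayed by hand against the same profile. A minimal sketch, assuming the repository's testdata/busybox.yaml and the context created by this run; kubectl wait only approximates the test's own polling helper:

	# create the busybox test pod used by DeployApp
	kubectl --context default-k8s-diff-port-559701 create -f testdata/busybox.yaml
	# wait for the pod carrying the integration-test=busybox label to become Ready (the test allows 8m0s)
	kubectl --context default-k8s-diff-port-559701 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	# the test then checks the open-file limit inside the container
	kubectl --context default-k8s-diff-port-559701 exec busybox -- /bin/sh -c "ulimit -n"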

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-559701 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-559701 --alsologtostderr -v=3: (12.013037289s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701: exit status 7 (86.776336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
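The stop, status, and addons-enable sequence verified above can be replayed manually with the same binary and profile; a sketch using the commands recorded in the log (status exits 7 while the host is stopped, which the test treats as acceptable):

	out/minikube-linux-arm64 stop -p default-k8s-diff-port-559701 --alsologtostderr -v=3
	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701 || echo "status exited $? (7 is expected for a stopped host)"
	out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-559701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4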

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-559701 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.431725837s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-559701 -n default-k8s-diff-port-559701
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270206 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6afd63b7-397f-4631-b006-dd708750d125] Pending
helpers_test.go:352: "busybox" [6afd63b7-397f-4631-b006-dd708750d125] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6afd63b7-397f-4631-b006-dd708750d125] Running
E1120 22:25:58.888415  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004397097s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270206 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-270206 --alsologtostderr -v=3
E1120 22:26:15.819436  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-270206 --alsologtostderr -v=3: (12.829806414s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206: exit status 7 (71.500858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-270206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-270206 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.072302353s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270206 -n embed-certs-270206
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9r89r" [8a5c5747-a052-47dd-8fb2-01d08cd64913] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003873917s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9r89r" [8a5c5747-a052-47dd-8fb2-01d08cd64913] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01044831s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-559701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
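A rough manual equivalent of the dashboard checks above (the label, namespace, and deployment name come from the log; kubectl wait is only an approximation of the test's polling helper):

	kubectl --context default-k8s-diff-port-559701 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context default-k8s-diff-port-559701 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper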

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-559701 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)
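The image verification above parses the JSON listing; the same data can be inspected by hand (the table output is minikube's default format for image list):

	# JSON listing used by the test
	out/minikube-linux-arm64 -p default-k8s-diff-port-559701 image list --format=json
	# human-readable listing of the same images
	out/minikube-linux-arm64 -p default-k8s-diff-port-559701 image list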

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.910668487s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8zhp9" [54738609-0716-4bbe-a7c8-f7bf920b502b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004084747s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8zhp9" [54738609-0716-4bbe-a7c8-f7bf920b502b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003473758s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-270206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-270206 image list --format=json
E1120 22:27:21.351383  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.357722  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.369029  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.390401  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:27:21.431693  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 22:27:41.846395  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:28:02.327681  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.465896511s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-135623 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-135623 --alsologtostderr -v=3: (1.402058345s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623: exit status 7 (109.431298ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-135623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-135623 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.475644956s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-135623 -n newest-cni-135623
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-041029 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5c2a308-e94e-47c2-ae54-0a65575a7220] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5c2a308-e94e-47c2-ae54-0a65575a7220] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004215838s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-041029 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-041029 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-041029 --alsologtostderr -v=3: (12.695716202s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-135623 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.206385546s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029: exit status 7 (105.653246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-041029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (63.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-041029 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.860100833s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-041029 -n no-preload-041029
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (63.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5fl85" [df232e57-08f8-4065-abe1-33961949ca0f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003220372s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5fl85" [df232e57-08f8-4065-abe1-33961949ca0f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007330589s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-041029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-041029 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (83.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.066308845s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.07s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-640880 "pgrep -a kubelet"
I1120 22:30:13.182217  836852 config.go:182] Loaded profile config "auto-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
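The KubeletFlags check simply inspects the kubelet command line running inside the node; it can be re-run directly (command taken verbatim from the log):

	out/minikube-linux-arm64 ssh -p auto-640880 "pgrep -a kubelet"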

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-640880 replace --force -f testdata/netcat-deployment.yaml
I1120 22:30:13.559260  836852 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g4hds" [329baf63-bb53-499c-8bea-076dca0b3de2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g4hds" [329baf63-bb53-499c-8bea-076dca0b3de2] Running
E1120 22:30:22.079823  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.086305  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.097799  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.119293  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.160969  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.242413  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.404685  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:22.726245  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:30:23.368253  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004139526s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-640880 exec deployment/netcat -- nslookup kubernetes.default
E1120 22:30:24.649725  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
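The three probes above all run inside the netcat deployment created by NetCatPod; a condensed manual version using the same commands recorded in the log:

	# DNS: resolve the in-cluster kubernetes.default service
	kubectl --context auto-640880 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach port 8080 on itself via localhost
	kubectl --context auto-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself back through the netcat service name
	kubectl --context auto-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"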

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1120 22:31:03.058923  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:31:15.820003  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/functional-038709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.432959944s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wtd49" [df19b694-70b5-46d8-aab1-9066b41bf86f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004021615s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-640880 "pgrep -a kubelet"
I1120 22:31:41.044012  836852 config.go:182] Loaded profile config "kindnet-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-640880 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zmv8t" [696d6b45-1d90-402b-9f7f-2e7be13ccfde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 22:31:44.021035  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zmv8t" [696d6b45-1d90-402b-9f7f-2e7be13ccfde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004458564s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-92qlk" [6278a6dd-200f-47ff-9030-2578cf81e971] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004951744s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-640880 "pgrep -a kubelet"
I1120 22:32:13.790275  836852 config.go:182] Loaded profile config "calico-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-640880 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fx4d7" [5ad92fb3-5486-42a5-8b8a-15cb09cbcdc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fx4d7" [5ad92fb3-5486-42a5-8b8a-15cb09cbcdc4] Running
E1120 22:32:21.351643  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/old-k8s-version-443192/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004260206s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.272475071s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.27s)
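The network-plugin runs in this section differ mainly in the --cni value handed to minikube start; a trimmed sketch of the two forms seen above (wait and logging flags omitted for brevity):

	# built-in CNI selected by name (kindnet, calico, flannel)
	out/minikube-linux-arm64 start -p kindnet-640880 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
	# custom CNI supplied as a manifest path
	out/minikube-linux-arm64 start -p custom-flannel-640880 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio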

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (73.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1120 22:33:05.942548  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.128317  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.134621  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.145933  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.167233  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.208556  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.289897  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.451335  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:22.773297  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:23.414739  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:24.696445  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:33:27.258484  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m13.031944508s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.03s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-640880 "pgrep -a kubelet"
I1120 22:33:28.109529  836852 config.go:182] Loaded profile config "custom-flannel-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-640880 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dvdn4" [2eeca984-61cc-447b-85f5-8262d702c150] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 22:33:32.380013  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dvdn4" [2eeca984-61cc-447b-85f5-8262d702c150] Running
E1120 22:33:38.578082  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/addons-828342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004631372s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
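The DNS step issues a single in-cluster lookup of kubernetes.default from inside the netcat deployment. A minimal sketch of that probe, assuming kubectl on PATH and the context name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Resolve the in-cluster API service from inside the netcat pod, the same
	// command the DNS check runs above.
	out, err := exec.Command("kubectl", "--context", "custom-flannel-640880",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		log.Fatalf("in-cluster DNS lookup failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}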

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
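Localhost and HairPin run the same netcat probe against two targets: the pod's own loopback, and the pod's own Service name (which only succeeds if the CNI supports hairpin traffic back to the originating pod). A minimal sketch combining both, using the exact nc invocation shown above; the context name is illustrative:

package main

import (
	"log"
	"os/exec"
)

// probe runs the suite's netcat connectivity check inside the netcat
// deployment against the given host on port 8080.
func probe(host string) error {
	return exec.Command("kubectl", "--context", "custom-flannel-640880",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+host+" 8080").Run()
}

func main() {
	if err := probe("localhost"); err != nil {
		log.Fatalf("localhost check failed: %v", err)
	}
	// Hairpin: the pod dials its own Service name and must be able to reach
	// itself back through the cluster network.
	if err := probe("netcat"); err != nil {
		log.Fatalf("hairpin check failed: %v", err)
	}
	log.Println("localhost and hairpin checks passed")
}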

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (64.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1120 22:34:03.103125  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.659572578s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.66s)
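Each Start step in this group brings up a fresh profile with a specific CNI on the crio runtime. A minimal sketch that replays the flannel start command above outside the test harness (binary path and profile name follow this run's layout and are illustrative):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Start a profile with the flannel CNI, using the same flags the Start
	// step above passes to minikube.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "flannel-640880",
		"--memory=3072", "--wait=true", "--wait-timeout=15m",
		"--cni=flannel", "--driver=docker", "--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}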

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-640880 "pgrep -a kubelet"
I1120 22:34:05.475795  836852 config.go:182] Loaded profile config "enable-default-cni-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-640880 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-prmxk" [b93801c0-4042-416b-8345-7bac03e4da95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-prmxk" [b93801c0-4042-416b-8345-7bac03e4da95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004061257s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (77.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1120 22:34:44.065393  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-640880 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.486559091s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-h6zx8" [4f9002ba-a1be-4518-adbc-500404be3a82] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00317506s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-640880 "pgrep -a kubelet"
I1120 22:35:12.612304  836852 config.go:182] Loaded profile config "flannel-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-640880 replace --force -f testdata/netcat-deployment.yaml
I1120 22:35:12.982148  836852 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f76gf" [09046e1f-fd76-4e5f-97c9-75dae275bda8] Pending
E1120 22:35:13.515387  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.521695  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.533007  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.554343  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.595668  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.677000  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:13.838414  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:14.160071  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:14.802302  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f76gf" [09046e1f-fd76-4e5f-97c9-75dae275bda8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 22:35:16.084266  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:18.646439  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f76gf" [09046e1f-fd76-4e5f-97c9-75dae275bda8] Running
E1120 22:35:22.079763  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/default-k8s-diff-port-559701/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 22:35:23.768323  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/auto-640880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.020899244s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-640880 "pgrep -a kubelet"
I1120 22:35:59.046623  836852 config.go:182] Loaded profile config "bridge-640880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-640880 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8tqcp" [32568a89-dbc2-49f8-9403-e772f827799e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8tqcp" [32568a89-dbc2-49f8-9403-e772f827799e] Running
E1120 22:36:05.987650  836852 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/no-preload-041029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003639893s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-640880 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-640880 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-294137 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-294137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-294137
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-305138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-305138
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-640880 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-640880" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-410652
contexts:
- context:
    cluster: kubernetes-upgrade-410652
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-410652
  name: kubernetes-upgrade-410652
current-context: kubernetes-upgrade-410652
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-410652
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.crt
    client-key: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-640880

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640880"

                                                
                                                
----------------------- debugLogs end: kubenet-640880 [took: 3.602748253s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-640880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-640880
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)
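The kubectl config dumped in the debugLogs above is the leftover kubernetes-upgrade-410652 kubeconfig; the kubenet-640880 profile itself was never created, which is why every other probe reports a missing context. A minimal sketch, assuming client-go is available, of loading a kubeconfig of that shape and listing its contexts:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (same shape as the file dumped by debugLogs)
	// and report which context kubectl would actually use.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
}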

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-640880 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-640880

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/hosts:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/resolv.conf:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-640880

>>> host: crictl pods:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: crictl containers:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> k8s: describe netcat deployment:
error: context "cilium-640880" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-640880" does not exist

>>> k8s: netcat logs:
error: context "cilium-640880" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-640880" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-640880" does not exist

>>> k8s: coredns logs:
error: context "cilium-640880" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-640880" does not exist

>>> k8s: api server logs:
error: context "cilium-640880" does not exist

>>> host: /etc/cni:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: ip a s:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: ip r s:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: iptables-save:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: iptables table nat:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-640880

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-640880

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-640880" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-640880" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-640880

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-640880

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-640880" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-640880" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-640880" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-640880" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-640880" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: kubelet daemon config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> k8s: kubelet logs:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-834992/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-410652
contexts:
- context:
    cluster: kubernetes-upgrade-410652
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 22:19:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-410652
  name: kubernetes-upgrade-410652
current-context: kubernetes-upgrade-410652
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-410652
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.crt
    client-key: /home/jenkins/minikube-integration/21923-834992/.minikube/profiles/kubernetes-upgrade-410652/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-640880

>>> host: docker daemon status:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: docker daemon config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: docker system info:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: cri-docker daemon status:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: cri-docker daemon config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: cri-dockerd version:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: containerd daemon status:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: containerd daemon config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: containerd config dump:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: crio daemon status:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: crio daemon config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: /etc/crio:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

>>> host: crio config:
* Profile "cilium-640880" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640880"

----------------------- debugLogs end: cilium-640880 [took: 4.319638034s] --------------------------------
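Every kubectl-based probe in the dump above fails the same way because the "cilium-640880" context was never created in the kubeconfig shown above (the test skipped before "minikube start" ran). Below is a minimal Go sketch of how a debug collector could guard against that case; the contextExists helper and its use of "kubectl config get-contexts -o name" are illustrative assumptions, not minikube's actual debugLogs code.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// contextExists reports whether the named kubeconfig context shows up in
// "kubectl config get-contexts -o name". Hypothetical helper for illustration.
func contextExists(name string) (bool, error) {
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        return false, err
    }
    for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        if ctx == name {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := contextExists("cilium-640880")
    if err != nil {
        fmt.Println("could not list kubeconfig contexts:", err)
        return
    }
    if !ok {
        fmt.Println(`skipping kubectl debug dumps: context "cilium-640880" does not exist`)
        return
    }
    // ...run the kubectl-based debug commands against the existing context here...
}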
helpers_test.go:175: Cleaning up "cilium-640880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-640880
--- SKIP: TestNetworkPlugins/group/cilium (4.48s)